00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2022 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3287 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.119 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.119 The recommended git tool is: git 00:00:00.119 using credential 00000000-0000-0000-0000-000000000002 00:00:00.121 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.174 Fetching changes from the remote Git repository 00:00:00.178 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.221 Using shallow fetch with depth 1 00:00:00.221 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.221 > git --version # timeout=10 00:00:00.250 > git --version # 'git version 2.39.2' 00:00:00.250 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.266 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.266 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.574 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.588 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.603 Checking out Revision 1c6ed56008363df82da0fcec030d6d5a1f7bd340 (FETCH_HEAD) 00:00:04.603 > git config core.sparsecheckout # timeout=10 00:00:04.616 > git read-tree -mu HEAD # timeout=10 00:00:04.634 > git checkout -f 1c6ed56008363df82da0fcec030d6d5a1f7bd340 # timeout=5 00:00:04.659 Commit message: "spdk-abi-per-patch: pass revision to subbuild" 00:00:04.660 > git rev-list --no-walk 1c6ed56008363df82da0fcec030d6d5a1f7bd340 # timeout=10 00:00:04.750 [Pipeline] Start of Pipeline 00:00:04.763 [Pipeline] library 00:00:04.765 Loading library shm_lib@master 00:00:04.765 Library shm_lib@master is cached. Copying from home. 00:00:04.780 [Pipeline] node 00:01:41.361 Running on CYP12 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:41.363 [Pipeline] { 00:01:41.376 [Pipeline] catchError 00:01:41.377 [Pipeline] { 00:01:41.393 [Pipeline] wrap 00:01:41.405 [Pipeline] { 00:01:41.428 [Pipeline] stage 00:01:41.430 [Pipeline] { (Prologue) 00:01:41.634 [Pipeline] sh 00:01:41.919 + logger -p user.info -t JENKINS-CI 00:01:41.939 [Pipeline] echo 00:01:41.941 Node: CYP12 00:01:41.950 [Pipeline] sh 00:01:42.254 [Pipeline] setCustomBuildProperty 00:01:42.269 [Pipeline] echo 00:01:42.271 Cleanup processes 00:01:42.278 [Pipeline] sh 00:01:42.564 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:42.564 1588753 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:42.583 [Pipeline] sh 00:01:42.870 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:42.870 ++ grep -v 'sudo pgrep' 00:01:42.870 ++ awk '{print $1}' 00:01:42.870 + sudo kill -9 00:01:42.870 + true 00:01:42.885 [Pipeline] cleanWs 00:01:42.896 [WS-CLEANUP] Deleting project workspace... 00:01:42.896 [WS-CLEANUP] Deferred wipeout is used... 
00:01:42.903 [WS-CLEANUP] done 00:01:42.908 [Pipeline] setCustomBuildProperty 00:01:42.923 [Pipeline] sh 00:01:43.207 + sudo git config --global --replace-all safe.directory '*' 00:01:43.307 [Pipeline] httpRequest 00:01:43.327 [Pipeline] echo 00:01:43.329 Sorcerer 10.211.164.101 is alive 00:01:43.339 [Pipeline] httpRequest 00:01:43.344 HttpMethod: GET 00:01:43.344 URL: http://10.211.164.101/packages/jbp_1c6ed56008363df82da0fcec030d6d5a1f7bd340.tar.gz 00:01:43.345 Sending request to url: http://10.211.164.101/packages/jbp_1c6ed56008363df82da0fcec030d6d5a1f7bd340.tar.gz 00:01:43.348 Response Code: HTTP/1.1 200 OK 00:01:43.348 Success: Status code 200 is in the accepted range: 200,404 00:01:43.349 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_1c6ed56008363df82da0fcec030d6d5a1f7bd340.tar.gz 00:01:43.492 [Pipeline] sh 00:01:43.778 + tar --no-same-owner -xf jbp_1c6ed56008363df82da0fcec030d6d5a1f7bd340.tar.gz 00:01:43.796 [Pipeline] httpRequest 00:01:43.813 [Pipeline] echo 00:01:43.815 Sorcerer 10.211.164.101 is alive 00:01:43.824 [Pipeline] httpRequest 00:01:43.830 HttpMethod: GET 00:01:43.830 URL: http://10.211.164.101/packages/spdk_8fb860b73ad34fc27668faa56efd7776760ea187.tar.gz 00:01:43.831 Sending request to url: http://10.211.164.101/packages/spdk_8fb860b73ad34fc27668faa56efd7776760ea187.tar.gz 00:01:43.833 Response Code: HTTP/1.1 200 OK 00:01:43.834 Success: Status code 200 is in the accepted range: 200,404 00:01:43.834 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_8fb860b73ad34fc27668faa56efd7776760ea187.tar.gz 00:01:46.271 [Pipeline] sh 00:01:46.578 + tar --no-same-owner -xf spdk_8fb860b73ad34fc27668faa56efd7776760ea187.tar.gz 00:01:49.923 [Pipeline] sh 00:01:50.204 + git -C spdk log --oneline -n5 00:01:50.204 8fb860b73 test/dd: check spdk_dd direct link to liburing 00:01:50.204 89648519b bdev/compress: Output the pm_path entry for bdev_get_bdevs() 00:01:50.204 a1a2e2b48 nvme/pcie: add debug print for number of SGL/PRP entries 00:01:50.204 8b5c4be8b nvme/fio_plugin: add support for the disable_pcie_sgl_merge option 00:01:50.204 e431ba2e4 nvme/pcie: add disable_pcie_sgl_merge option 00:01:50.222 [Pipeline] withCredentials 00:01:50.232 > git --version # timeout=10 00:01:50.244 > git --version # 'git version 2.39.2' 00:01:50.262 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:50.264 [Pipeline] { 00:01:50.273 [Pipeline] retry 00:01:50.275 [Pipeline] { 00:01:50.294 [Pipeline] sh 00:01:50.579 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:52.505 [Pipeline] } 00:01:52.535 [Pipeline] // retry 00:01:52.541 [Pipeline] } 00:01:52.562 [Pipeline] // withCredentials 00:01:52.576 [Pipeline] httpRequest 00:01:52.620 [Pipeline] echo 00:01:52.622 Sorcerer 10.211.164.101 is alive 00:01:52.631 [Pipeline] httpRequest 00:01:52.638 HttpMethod: GET 00:01:52.639 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:52.668 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:52.668 Response Code: HTTP/1.1 200 OK 00:01:52.669 Success: Status code 200 is in the accepted range: 200,404 00:01:52.669 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:53.884 [Pipeline] sh 00:01:54.170 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:56.095 [Pipeline] sh 00:01:56.380 + git -C dpdk log --oneline -n5 00:01:56.380 
caf0f5d395 version: 22.11.4 00:01:56.380 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:56.380 dc9c799c7d vhost: fix missing spinlock unlock 00:01:56.380 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:56.380 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:56.390 [Pipeline] } 00:01:56.406 [Pipeline] // stage 00:01:56.415 [Pipeline] stage 00:01:56.417 [Pipeline] { (Prepare) 00:01:56.439 [Pipeline] writeFile 00:01:56.457 [Pipeline] sh 00:01:56.739 + logger -p user.info -t JENKINS-CI 00:01:56.752 [Pipeline] sh 00:01:57.038 + logger -p user.info -t JENKINS-CI 00:01:57.051 [Pipeline] sh 00:01:57.336 + cat autorun-spdk.conf 00:01:57.337 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:57.337 SPDK_TEST_NVMF=1 00:01:57.337 SPDK_TEST_NVME_CLI=1 00:01:57.337 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:57.337 SPDK_TEST_NVMF_NICS=e810 00:01:57.337 SPDK_TEST_VFIOUSER=1 00:01:57.337 SPDK_RUN_UBSAN=1 00:01:57.337 NET_TYPE=phy 00:01:57.337 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:57.337 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:57.346 RUN_NIGHTLY=1 00:01:57.351 [Pipeline] readFile 00:01:57.381 [Pipeline] withEnv 00:01:57.384 [Pipeline] { 00:01:57.399 [Pipeline] sh 00:01:57.687 + set -ex 00:01:57.688 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:57.688 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:57.688 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:57.688 ++ SPDK_TEST_NVMF=1 00:01:57.688 ++ SPDK_TEST_NVME_CLI=1 00:01:57.688 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:57.688 ++ SPDK_TEST_NVMF_NICS=e810 00:01:57.688 ++ SPDK_TEST_VFIOUSER=1 00:01:57.688 ++ SPDK_RUN_UBSAN=1 00:01:57.688 ++ NET_TYPE=phy 00:01:57.688 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:57.688 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:57.688 ++ RUN_NIGHTLY=1 00:01:57.688 + case $SPDK_TEST_NVMF_NICS in 00:01:57.688 + DRIVERS=ice 00:01:57.688 + [[ tcp == \r\d\m\a ]] 00:01:57.688 + [[ -n ice ]] 00:01:57.688 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:57.688 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:02:07.687 rmmod: ERROR: Module irdma is not currently loaded 00:02:07.687 rmmod: ERROR: Module i40iw is not currently loaded 00:02:07.687 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:02:07.687 + true 00:02:07.687 + for D in $DRIVERS 00:02:07.687 + sudo modprobe ice 00:02:07.687 + exit 0 00:02:07.696 [Pipeline] } 00:02:07.710 [Pipeline] // withEnv 00:02:07.716 [Pipeline] } 00:02:07.734 [Pipeline] // stage 00:02:07.744 [Pipeline] catchError 00:02:07.746 [Pipeline] { 00:02:07.761 [Pipeline] timeout 00:02:07.761 Timeout set to expire in 50 min 00:02:07.763 [Pipeline] { 00:02:07.777 [Pipeline] stage 00:02:07.779 [Pipeline] { (Tests) 00:02:07.795 [Pipeline] sh 00:02:08.082 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:08.082 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:08.082 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:08.082 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:02:08.082 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:08.082 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:08.082 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:02:08.082 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:08.082 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:08.083 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:08.083 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:02:08.083 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:08.083 + source /etc/os-release 00:02:08.083 ++ NAME='Fedora Linux' 00:02:08.083 ++ VERSION='38 (Cloud Edition)' 00:02:08.083 ++ ID=fedora 00:02:08.083 ++ VERSION_ID=38 00:02:08.083 ++ VERSION_CODENAME= 00:02:08.083 ++ PLATFORM_ID=platform:f38 00:02:08.083 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:08.083 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:08.083 ++ LOGO=fedora-logo-icon 00:02:08.083 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:08.083 ++ HOME_URL=https://fedoraproject.org/ 00:02:08.083 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:08.083 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:08.083 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:08.083 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:08.083 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:08.083 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:08.083 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:08.083 ++ SUPPORT_END=2024-05-14 00:02:08.083 ++ VARIANT='Cloud Edition' 00:02:08.083 ++ VARIANT_ID=cloud 00:02:08.083 + uname -a 00:02:08.083 Linux spdk-cyp-12 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:08.083 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:11.424 Hugepages 00:02:11.424 node hugesize free / total 00:02:11.424 node0 1048576kB 0 / 0 00:02:11.424 node0 2048kB 0 / 0 00:02:11.424 node1 1048576kB 0 / 0 00:02:11.424 node1 2048kB 0 / 0 00:02:11.425 00:02:11.425 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:11.425 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:02:11.425 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:02:11.425 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:02:11.425 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:02:11.425 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:02:11.425 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:02:11.425 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:02:11.425 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:02:11.425 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:02:11.425 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:02:11.425 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:02:11.425 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:02:11.425 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:02:11.425 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:02:11.425 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:02:11.425 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:02:11.425 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:02:11.425 + rm -f /tmp/spdk-ld-path 00:02:11.425 + source autorun-spdk.conf 00:02:11.425 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:11.425 ++ SPDK_TEST_NVMF=1 00:02:11.425 ++ SPDK_TEST_NVME_CLI=1 00:02:11.425 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:11.425 ++ SPDK_TEST_NVMF_NICS=e810 00:02:11.425 ++ SPDK_TEST_VFIOUSER=1 00:02:11.425 ++ SPDK_RUN_UBSAN=1 00:02:11.425 ++ NET_TYPE=phy 00:02:11.425 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:11.425 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:11.425 ++ RUN_NIGHTLY=1 00:02:11.425 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:11.425 + [[ -n '' ]] 00:02:11.425 + sudo git config --global --add safe.directory 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:11.685 + for M in /var/spdk/build-*-manifest.txt 00:02:11.685 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:11.685 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:11.685 + for M in /var/spdk/build-*-manifest.txt 00:02:11.685 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:11.685 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:11.685 ++ uname 00:02:11.685 + [[ Linux == \L\i\n\u\x ]] 00:02:11.685 + sudo dmesg -T 00:02:11.685 + sudo dmesg --clear 00:02:11.685 + dmesg_pid=1590454 00:02:11.685 + [[ Fedora Linux == FreeBSD ]] 00:02:11.685 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:11.685 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:11.685 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:11.685 + [[ -x /usr/src/fio-static/fio ]] 00:02:11.685 + export FIO_BIN=/usr/src/fio-static/fio 00:02:11.685 + FIO_BIN=/usr/src/fio-static/fio 00:02:11.685 + sudo dmesg -Tw 00:02:11.685 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:11.685 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:11.685 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:11.685 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:11.685 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:11.685 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:11.685 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:11.685 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:11.685 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:11.685 Test configuration: 00:02:11.685 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:11.685 SPDK_TEST_NVMF=1 00:02:11.685 SPDK_TEST_NVME_CLI=1 00:02:11.685 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:11.685 SPDK_TEST_NVMF_NICS=e810 00:02:11.685 SPDK_TEST_VFIOUSER=1 00:02:11.685 SPDK_RUN_UBSAN=1 00:02:11.685 NET_TYPE=phy 00:02:11.685 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:11.686 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:11.686 RUN_NIGHTLY=1 10:18:17 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:11.686 10:18:17 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:11.686 10:18:17 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:11.686 10:18:17 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:11.686 10:18:17 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:11.686 10:18:17 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:11.686 10:18:17 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:11.686 10:18:17 -- paths/export.sh@5 -- $ export PATH 00:02:11.686 10:18:17 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:11.686 10:18:17 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:11.686 10:18:17 -- common/autobuild_common.sh@447 -- $ date +%s 00:02:11.686 10:18:17 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721636297.XXXXXX 00:02:11.686 10:18:17 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721636297.06vmyN 00:02:11.686 10:18:17 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:02:11.686 10:18:17 -- common/autobuild_common.sh@453 -- $ '[' -n v22.11.4 ']' 00:02:11.686 10:18:17 -- common/autobuild_common.sh@454 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:11.686 10:18:17 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:02:11.686 10:18:17 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:11.686 10:18:17 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:11.686 10:18:17 -- common/autobuild_common.sh@463 -- $ get_config_params 00:02:11.686 10:18:17 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:02:11.686 10:18:17 -- common/autotest_common.sh@10 -- $ set +x 00:02:11.686 10:18:17 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:02:11.686 10:18:17 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:02:11.686 10:18:17 -- pm/common@17 -- $ local monitor 00:02:11.686 10:18:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:11.686 10:18:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:11.686 10:18:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:11.686 10:18:17 -- pm/common@21 -- $ date +%s 00:02:11.686 10:18:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:11.686 10:18:17 -- pm/common@25 -- $ sleep 1 00:02:11.686 10:18:17 -- pm/common@21 -- $ date +%s 00:02:11.686 10:18:17 -- pm/common@21 -- $ date +%s 00:02:11.686 10:18:17 -- pm/common@21 -- $ date +%s 00:02:11.686 10:18:17 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721636297 00:02:11.686 10:18:17 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721636297 00:02:11.686 10:18:17 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721636297 00:02:11.686 10:18:17 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721636297 00:02:11.946 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721636297_collect-vmstat.pm.log 00:02:11.946 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721636297_collect-cpu-load.pm.log 00:02:11.946 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721636297_collect-cpu-temp.pm.log 00:02:11.946 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721636297_collect-bmc-pm.bmc.pm.log 00:02:12.888 10:18:18 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:02:12.888 10:18:18 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:12.888 10:18:18 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:12.888 10:18:18 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:12.888 10:18:18 -- spdk/autobuild.sh@16 -- $ date -u 00:02:12.888 Mon Jul 22 08:18:18 AM UTC 2024 00:02:12.888 10:18:18 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:12.888 v24.09-pre-259-g8fb860b73 00:02:12.888 10:18:18 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:12.888 10:18:18 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:12.888 10:18:18 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:12.888 10:18:18 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:12.888 10:18:18 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:12.888 10:18:18 -- common/autotest_common.sh@10 -- $ set +x 00:02:12.888 ************************************ 00:02:12.888 START TEST ubsan 00:02:12.888 ************************************ 00:02:12.888 10:18:18 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:02:12.888 using ubsan 00:02:12.888 00:02:12.888 real 0m0.000s 00:02:12.888 user 0m0.000s 00:02:12.888 sys 0m0.000s 00:02:12.888 10:18:18 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:12.888 10:18:18 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:12.888 ************************************ 00:02:12.888 END TEST ubsan 00:02:12.888 ************************************ 00:02:12.888 10:18:18 -- common/autotest_common.sh@1142 -- $ return 0 00:02:12.888 10:18:18 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:12.888 10:18:18 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:12.888 10:18:18 -- common/autobuild_common.sh@439 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:12.888 10:18:18 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']' 00:02:12.888 10:18:18 -- common/autotest_common.sh@1105 -- $ 
xtrace_disable 00:02:12.888 10:18:18 -- common/autotest_common.sh@10 -- $ set +x 00:02:12.888 ************************************ 00:02:12.888 START TEST build_native_dpdk 00:02:12.888 ************************************ 00:02:12.888 10:18:18 build_native_dpdk -- common/autotest_common.sh@1123 -- $ _build_native_dpdk 00:02:12.888 10:18:18 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:12.888 10:18:18 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:12.888 10:18:18 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:12.888 10:18:18 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:12.888 10:18:18 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:12.888 10:18:18 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:12.888 10:18:18 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:12.888 10:18:18 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:12.888 10:18:18 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:12.888 10:18:18 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:12.888 10:18:18 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:12.888 10:18:18 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:12.888 10:18:18 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:12.888 10:18:18 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:12.888 10:18:18 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:12.888 10:18:18 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:12.888 10:18:18 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:12.888 10:18:18 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:02:12.888 10:18:18 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:12.888 10:18:18 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:02:12.888 caf0f5d395 version: 22.11.4 00:02:12.888 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:12.888 dc9c799c7d vhost: fix missing spinlock unlock 00:02:12.888 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:12.888 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:12.888 10:18:18 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:12.888 10:18:18 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:12.888 10:18:18 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:12.888 10:18:18 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:12.888 10:18:18 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:12.888 10:18:18 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:12.888 10:18:18 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:12.888 10:18:18 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:12.888 10:18:18 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:12.888 10:18:18 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:12.888 10:18:18 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:12.888 10:18:18 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:12.888 10:18:18 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:12.888 10:18:18 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:12.888 10:18:18 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:12.888 10:18:18 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:12.888 10:18:18 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:12.888 10:18:18 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:02:12.888 10:18:18 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:12.888 10:18:18 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:02:12.888 10:18:18 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:02:12.888 10:18:18 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:02:12.888 10:18:18 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:02:12.888 10:18:18 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:02:12.888 10:18:18 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:02:12.888 10:18:18 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:02:12.888 10:18:18 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:02:12.888 10:18:18 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:02:12.888 10:18:18 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:02:12.888 10:18:18 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:02:12.888 
10:18:18 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:02:12.888 10:18:18 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:02:12.889 10:18:18 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:12.889 10:18:18 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:02:12.889 10:18:18 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:02:12.889 10:18:18 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:12.889 10:18:18 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:02:12.889 10:18:18 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:02:12.889 10:18:18 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:02:12.889 10:18:18 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:02:12.889 10:18:18 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:12.889 10:18:18 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:02:12.889 10:18:18 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:02:12.889 10:18:18 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:02:12.889 10:18:18 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:02:12.889 10:18:18 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:12.889 patching file config/rte_config.h 00:02:12.889 Hunk #1 succeeded at 60 (offset 1 line). 00:02:12.889 10:18:18 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:02:12.889 10:18:18 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:02:12.889 10:18:18 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:02:12.889 10:18:18 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:02:12.889 10:18:18 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:02:12.889 10:18:18 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:02:12.889 10:18:18 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:02:12.889 10:18:18 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:02:12.889 10:18:18 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:02:12.889 10:18:18 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:02:12.889 10:18:18 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:02:12.889 10:18:18 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:02:12.889 10:18:18 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:02:12.889 10:18:18 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:02:12.889 10:18:18 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:02:12.889 10:18:18 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:13.150 10:18:18 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:02:13.150 10:18:18 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:02:13.150 10:18:18 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:13.150 10:18:18 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:02:13.150 10:18:18 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:02:13.150 10:18:18 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 24 00:02:13.150 10:18:18 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:02:13.150 10:18:18 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:13.150 10:18:18 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:02:13.150 10:18:18 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=24 00:02:13.150 10:18:18 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:02:13.150 10:18:18 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:02:13.150 10:18:18 build_native_dpdk -- scripts/common.sh@365 -- $ return 0 00:02:13.150 10:18:18 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:13.150 patching file lib/pcapng/rte_pcapng.c 00:02:13.150 Hunk #1 succeeded at 110 (offset -18 lines). 00:02:13.150 10:18:18 build_native_dpdk -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:02:13.150 10:18:18 build_native_dpdk -- common/autobuild_common.sh@181 -- $ uname -s 00:02:13.150 10:18:18 build_native_dpdk -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:02:13.150 10:18:18 build_native_dpdk -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:13.150 10:18:18 build_native_dpdk -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:18.435 The Meson build system 00:02:18.435 Version: 1.3.1 00:02:18.435 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:18.435 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:02:18.435 Build type: native build 00:02:18.435 Program cat found: YES (/usr/bin/cat) 00:02:18.435 Project name: DPDK 00:02:18.435 Project version: 22.11.4 00:02:18.435 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:18.435 C linker for the host machine: gcc ld.bfd 2.39-16 00:02:18.435 Host machine cpu family: x86_64 00:02:18.435 Host machine cpu: x86_64 00:02:18.435 Message: ## Building in Developer Mode ## 00:02:18.435 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:18.435 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:02:18.435 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:02:18.435 Program objdump found: YES (/usr/bin/objdump) 00:02:18.435 Program python3 found: YES (/usr/bin/python3) 00:02:18.435 Program cat found: YES (/usr/bin/cat) 00:02:18.435 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
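The xtrace above walks through the lt/cmp_versions/decimal helpers in scripts/common.sh, which split the two dotted versions on ".-:" and compare them field by field: 22.11.4 is not below 21.11.0 (return 1), but is below 24.07.0 (return 0), which is why both patch steps run. A minimal standalone sketch of the same field-by-field idea, assuming plain bash and a hypothetical function name (it ignores the non-numeric fields the real decimal() helper normalizes):

# Hypothetical re-implementation of the dotted-version compare traced above.
# version_lt A B  -> returns 0 (true) if A < B, 1 otherwise.
version_lt() {
    local IFS='.-:'
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i x y
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        x=${a[i]:-0}; y=${b[i]:-0}
        (( x < y )) && return 0     # first differing field decides
        (( x > y )) && return 1
    done
    return 1                        # equal, so not strictly less-than
}

if version_lt 22.11.4 21.11.0; then echo yes; else echo no; fi   # prints "no", matching the "return 1" above
if version_lt 22.11.4 24.07.0; then echo yes; else echo no; fi   # prints "yes", matching the "return 0" above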
00:02:18.435 Checking for size of "void *" : 8 00:02:18.435 Checking for size of "void *" : 8 (cached) 00:02:18.435 Library m found: YES 00:02:18.435 Library numa found: YES 00:02:18.435 Has header "numaif.h" : YES 00:02:18.435 Library fdt found: NO 00:02:18.435 Library execinfo found: NO 00:02:18.435 Has header "execinfo.h" : YES 00:02:18.435 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:18.435 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:18.435 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:18.435 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:18.435 Run-time dependency openssl found: YES 3.0.9 00:02:18.435 Run-time dependency libpcap found: YES 1.10.4 00:02:18.435 Has header "pcap.h" with dependency libpcap: YES 00:02:18.435 Compiler for C supports arguments -Wcast-qual: YES 00:02:18.435 Compiler for C supports arguments -Wdeprecated: YES 00:02:18.435 Compiler for C supports arguments -Wformat: YES 00:02:18.435 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:18.435 Compiler for C supports arguments -Wformat-security: NO 00:02:18.435 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:18.435 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:18.435 Compiler for C supports arguments -Wnested-externs: YES 00:02:18.435 Compiler for C supports arguments -Wold-style-definition: YES 00:02:18.435 Compiler for C supports arguments -Wpointer-arith: YES 00:02:18.435 Compiler for C supports arguments -Wsign-compare: YES 00:02:18.435 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:18.435 Compiler for C supports arguments -Wundef: YES 00:02:18.435 Compiler for C supports arguments -Wwrite-strings: YES 00:02:18.435 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:18.435 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:18.435 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:18.435 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:18.435 Compiler for C supports arguments -mavx512f: YES 00:02:18.435 Checking if "AVX512 checking" compiles: YES 00:02:18.435 Fetching value of define "__SSE4_2__" : 1 00:02:18.435 Fetching value of define "__AES__" : 1 00:02:18.435 Fetching value of define "__AVX__" : 1 00:02:18.435 Fetching value of define "__AVX2__" : 1 00:02:18.435 Fetching value of define "__AVX512BW__" : 1 00:02:18.435 Fetching value of define "__AVX512CD__" : 1 00:02:18.435 Fetching value of define "__AVX512DQ__" : 1 00:02:18.435 Fetching value of define "__AVX512F__" : 1 00:02:18.435 Fetching value of define "__AVX512VL__" : 1 00:02:18.435 Fetching value of define "__PCLMUL__" : 1 00:02:18.436 Fetching value of define "__RDRND__" : 1 00:02:18.436 Fetching value of define "__RDSEED__" : 1 00:02:18.436 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:18.436 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:18.436 Message: lib/kvargs: Defining dependency "kvargs" 00:02:18.436 Message: lib/telemetry: Defining dependency "telemetry" 00:02:18.436 Checking for function "getentropy" : YES 00:02:18.436 Message: lib/eal: Defining dependency "eal" 00:02:18.436 Message: lib/ring: Defining dependency "ring" 00:02:18.436 Message: lib/rcu: Defining dependency "rcu" 00:02:18.436 Message: lib/mempool: Defining dependency "mempool" 00:02:18.436 Message: lib/mbuf: Defining dependency "mbuf" 00:02:18.436 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:18.436 Fetching value of 
define "__AVX512F__" : 1 (cached) 00:02:18.436 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:18.436 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:18.436 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:18.436 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:18.436 Compiler for C supports arguments -mpclmul: YES 00:02:18.436 Compiler for C supports arguments -maes: YES 00:02:18.436 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:18.436 Compiler for C supports arguments -mavx512bw: YES 00:02:18.436 Compiler for C supports arguments -mavx512dq: YES 00:02:18.436 Compiler for C supports arguments -mavx512vl: YES 00:02:18.436 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:18.436 Compiler for C supports arguments -mavx2: YES 00:02:18.436 Compiler for C supports arguments -mavx: YES 00:02:18.436 Message: lib/net: Defining dependency "net" 00:02:18.436 Message: lib/meter: Defining dependency "meter" 00:02:18.436 Message: lib/ethdev: Defining dependency "ethdev" 00:02:18.436 Message: lib/pci: Defining dependency "pci" 00:02:18.436 Message: lib/cmdline: Defining dependency "cmdline" 00:02:18.436 Message: lib/metrics: Defining dependency "metrics" 00:02:18.436 Message: lib/hash: Defining dependency "hash" 00:02:18.436 Message: lib/timer: Defining dependency "timer" 00:02:18.436 Fetching value of define "__AVX2__" : 1 (cached) 00:02:18.436 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:18.436 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:18.436 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:18.436 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:18.436 Message: lib/acl: Defining dependency "acl" 00:02:18.436 Message: lib/bbdev: Defining dependency "bbdev" 00:02:18.436 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:18.436 Run-time dependency libelf found: YES 0.190 00:02:18.436 Message: lib/bpf: Defining dependency "bpf" 00:02:18.436 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:18.436 Message: lib/compressdev: Defining dependency "compressdev" 00:02:18.436 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:18.436 Message: lib/distributor: Defining dependency "distributor" 00:02:18.436 Message: lib/efd: Defining dependency "efd" 00:02:18.436 Message: lib/eventdev: Defining dependency "eventdev" 00:02:18.436 Message: lib/gpudev: Defining dependency "gpudev" 00:02:18.436 Message: lib/gro: Defining dependency "gro" 00:02:18.436 Message: lib/gso: Defining dependency "gso" 00:02:18.436 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:18.436 Message: lib/jobstats: Defining dependency "jobstats" 00:02:18.436 Message: lib/latencystats: Defining dependency "latencystats" 00:02:18.436 Message: lib/lpm: Defining dependency "lpm" 00:02:18.436 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:18.436 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:18.436 Fetching value of define "__AVX512IFMA__" : 1 00:02:18.436 Message: lib/member: Defining dependency "member" 00:02:18.436 Message: lib/pcapng: Defining dependency "pcapng" 00:02:18.436 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:18.436 Message: lib/power: Defining dependency "power" 00:02:18.436 Message: lib/rawdev: Defining dependency "rawdev" 00:02:18.436 Message: lib/regexdev: Defining dependency "regexdev" 00:02:18.436 Message: lib/dmadev: Defining dependency "dmadev" 00:02:18.436 Message: lib/rib: Defining dependency "rib" 00:02:18.436 Message: lib/reorder: 
Defining dependency "reorder" 00:02:18.436 Message: lib/sched: Defining dependency "sched" 00:02:18.436 Message: lib/security: Defining dependency "security" 00:02:18.436 Message: lib/stack: Defining dependency "stack" 00:02:18.436 Has header "linux/userfaultfd.h" : YES 00:02:18.436 Message: lib/vhost: Defining dependency "vhost" 00:02:18.436 Message: lib/ipsec: Defining dependency "ipsec" 00:02:18.436 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:18.436 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:18.436 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:18.436 Message: lib/fib: Defining dependency "fib" 00:02:18.436 Message: lib/port: Defining dependency "port" 00:02:18.436 Message: lib/pdump: Defining dependency "pdump" 00:02:18.436 Message: lib/table: Defining dependency "table" 00:02:18.436 Message: lib/pipeline: Defining dependency "pipeline" 00:02:18.436 Message: lib/graph: Defining dependency "graph" 00:02:18.436 Message: lib/node: Defining dependency "node" 00:02:18.436 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:18.436 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:18.436 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:18.436 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:18.436 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:18.436 Compiler for C supports arguments -Wno-unused-value: YES 00:02:18.436 Compiler for C supports arguments -Wno-format: YES 00:02:18.436 Compiler for C supports arguments -Wno-format-security: YES 00:02:18.436 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:18.436 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:19.008 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:19.008 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:19.008 Fetching value of define "__AVX2__" : 1 (cached) 00:02:19.008 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:19.008 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:19.008 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:19.008 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:19.008 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:19.008 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:19.008 Program doxygen found: YES (/usr/bin/doxygen) 00:02:19.008 Configuring doxy-api.conf using configuration 00:02:19.008 Program sphinx-build found: NO 00:02:19.008 Configuring rte_build_config.h using configuration 00:02:19.008 Message: 00:02:19.008 ================= 00:02:19.008 Applications Enabled 00:02:19.008 ================= 00:02:19.008 00:02:19.008 apps: 00:02:19.008 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:19.008 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:19.008 test-security-perf, 00:02:19.008 00:02:19.008 Message: 00:02:19.008 ================= 00:02:19.008 Libraries Enabled 00:02:19.008 ================= 00:02:19.008 00:02:19.008 libs: 00:02:19.008 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:19.008 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:19.008 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:19.008 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:02:19.008 member, pcapng, power, rawdev, regexdev, dmadev, rib, 
reorder, 00:02:19.008 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:19.008 table, pipeline, graph, node, 00:02:19.008 00:02:19.008 Message: 00:02:19.008 =============== 00:02:19.008 Drivers Enabled 00:02:19.008 =============== 00:02:19.008 00:02:19.008 common: 00:02:19.008 00:02:19.008 bus: 00:02:19.008 pci, vdev, 00:02:19.008 mempool: 00:02:19.008 ring, 00:02:19.008 dma: 00:02:19.008 00:02:19.008 net: 00:02:19.008 i40e, 00:02:19.008 raw: 00:02:19.008 00:02:19.008 crypto: 00:02:19.008 00:02:19.008 compress: 00:02:19.008 00:02:19.008 regex: 00:02:19.008 00:02:19.008 vdpa: 00:02:19.008 00:02:19.008 event: 00:02:19.008 00:02:19.008 baseband: 00:02:19.008 00:02:19.008 gpu: 00:02:19.008 00:02:19.008 00:02:19.008 Message: 00:02:19.008 ================= 00:02:19.008 Content Skipped 00:02:19.008 ================= 00:02:19.008 00:02:19.008 apps: 00:02:19.008 00:02:19.008 libs: 00:02:19.008 kni: explicitly disabled via build config (deprecated lib) 00:02:19.008 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:19.008 00:02:19.008 drivers: 00:02:19.008 common/cpt: not in enabled drivers build config 00:02:19.008 common/dpaax: not in enabled drivers build config 00:02:19.008 common/iavf: not in enabled drivers build config 00:02:19.008 common/idpf: not in enabled drivers build config 00:02:19.008 common/mvep: not in enabled drivers build config 00:02:19.008 common/octeontx: not in enabled drivers build config 00:02:19.008 bus/auxiliary: not in enabled drivers build config 00:02:19.008 bus/dpaa: not in enabled drivers build config 00:02:19.008 bus/fslmc: not in enabled drivers build config 00:02:19.008 bus/ifpga: not in enabled drivers build config 00:02:19.008 bus/vmbus: not in enabled drivers build config 00:02:19.008 common/cnxk: not in enabled drivers build config 00:02:19.009 common/mlx5: not in enabled drivers build config 00:02:19.009 common/qat: not in enabled drivers build config 00:02:19.009 common/sfc_efx: not in enabled drivers build config 00:02:19.009 mempool/bucket: not in enabled drivers build config 00:02:19.009 mempool/cnxk: not in enabled drivers build config 00:02:19.009 mempool/dpaa: not in enabled drivers build config 00:02:19.009 mempool/dpaa2: not in enabled drivers build config 00:02:19.009 mempool/octeontx: not in enabled drivers build config 00:02:19.009 mempool/stack: not in enabled drivers build config 00:02:19.009 dma/cnxk: not in enabled drivers build config 00:02:19.009 dma/dpaa: not in enabled drivers build config 00:02:19.009 dma/dpaa2: not in enabled drivers build config 00:02:19.009 dma/hisilicon: not in enabled drivers build config 00:02:19.009 dma/idxd: not in enabled drivers build config 00:02:19.009 dma/ioat: not in enabled drivers build config 00:02:19.009 dma/skeleton: not in enabled drivers build config 00:02:19.009 net/af_packet: not in enabled drivers build config 00:02:19.009 net/af_xdp: not in enabled drivers build config 00:02:19.009 net/ark: not in enabled drivers build config 00:02:19.009 net/atlantic: not in enabled drivers build config 00:02:19.009 net/avp: not in enabled drivers build config 00:02:19.009 net/axgbe: not in enabled drivers build config 00:02:19.009 net/bnx2x: not in enabled drivers build config 00:02:19.009 net/bnxt: not in enabled drivers build config 00:02:19.009 net/bonding: not in enabled drivers build config 00:02:19.009 net/cnxk: not in enabled drivers build config 00:02:19.009 net/cxgbe: not in enabled drivers build config 00:02:19.009 net/dpaa: not in enabled drivers build config 
00:02:19.009 net/dpaa2: not in enabled drivers build config 00:02:19.009 net/e1000: not in enabled drivers build config 00:02:19.009 net/ena: not in enabled drivers build config 00:02:19.009 net/enetc: not in enabled drivers build config 00:02:19.009 net/enetfec: not in enabled drivers build config 00:02:19.009 net/enic: not in enabled drivers build config 00:02:19.009 net/failsafe: not in enabled drivers build config 00:02:19.009 net/fm10k: not in enabled drivers build config 00:02:19.009 net/gve: not in enabled drivers build config 00:02:19.009 net/hinic: not in enabled drivers build config 00:02:19.009 net/hns3: not in enabled drivers build config 00:02:19.009 net/iavf: not in enabled drivers build config 00:02:19.009 net/ice: not in enabled drivers build config 00:02:19.009 net/idpf: not in enabled drivers build config 00:02:19.009 net/igc: not in enabled drivers build config 00:02:19.009 net/ionic: not in enabled drivers build config 00:02:19.009 net/ipn3ke: not in enabled drivers build config 00:02:19.009 net/ixgbe: not in enabled drivers build config 00:02:19.009 net/kni: not in enabled drivers build config 00:02:19.009 net/liquidio: not in enabled drivers build config 00:02:19.009 net/mana: not in enabled drivers build config 00:02:19.009 net/memif: not in enabled drivers build config 00:02:19.009 net/mlx4: not in enabled drivers build config 00:02:19.009 net/mlx5: not in enabled drivers build config 00:02:19.009 net/mvneta: not in enabled drivers build config 00:02:19.009 net/mvpp2: not in enabled drivers build config 00:02:19.009 net/netvsc: not in enabled drivers build config 00:02:19.009 net/nfb: not in enabled drivers build config 00:02:19.009 net/nfp: not in enabled drivers build config 00:02:19.009 net/ngbe: not in enabled drivers build config 00:02:19.009 net/null: not in enabled drivers build config 00:02:19.009 net/octeontx: not in enabled drivers build config 00:02:19.009 net/octeon_ep: not in enabled drivers build config 00:02:19.009 net/pcap: not in enabled drivers build config 00:02:19.009 net/pfe: not in enabled drivers build config 00:02:19.009 net/qede: not in enabled drivers build config 00:02:19.009 net/ring: not in enabled drivers build config 00:02:19.009 net/sfc: not in enabled drivers build config 00:02:19.009 net/softnic: not in enabled drivers build config 00:02:19.009 net/tap: not in enabled drivers build config 00:02:19.009 net/thunderx: not in enabled drivers build config 00:02:19.009 net/txgbe: not in enabled drivers build config 00:02:19.009 net/vdev_netvsc: not in enabled drivers build config 00:02:19.009 net/vhost: not in enabled drivers build config 00:02:19.009 net/virtio: not in enabled drivers build config 00:02:19.009 net/vmxnet3: not in enabled drivers build config 00:02:19.009 raw/cnxk_bphy: not in enabled drivers build config 00:02:19.009 raw/cnxk_gpio: not in enabled drivers build config 00:02:19.009 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:19.009 raw/ifpga: not in enabled drivers build config 00:02:19.009 raw/ntb: not in enabled drivers build config 00:02:19.009 raw/skeleton: not in enabled drivers build config 00:02:19.009 crypto/armv8: not in enabled drivers build config 00:02:19.009 crypto/bcmfs: not in enabled drivers build config 00:02:19.009 crypto/caam_jr: not in enabled drivers build config 00:02:19.009 crypto/ccp: not in enabled drivers build config 00:02:19.009 crypto/cnxk: not in enabled drivers build config 00:02:19.009 crypto/dpaa_sec: not in enabled drivers build config 00:02:19.009 crypto/dpaa2_sec: not in 
enabled drivers build config 00:02:19.009 crypto/ipsec_mb: not in enabled drivers build config 00:02:19.009 crypto/mlx5: not in enabled drivers build config 00:02:19.009 crypto/mvsam: not in enabled drivers build config 00:02:19.009 crypto/nitrox: not in enabled drivers build config 00:02:19.009 crypto/null: not in enabled drivers build config 00:02:19.009 crypto/octeontx: not in enabled drivers build config 00:02:19.009 crypto/openssl: not in enabled drivers build config 00:02:19.009 crypto/scheduler: not in enabled drivers build config 00:02:19.009 crypto/uadk: not in enabled drivers build config 00:02:19.009 crypto/virtio: not in enabled drivers build config 00:02:19.009 compress/isal: not in enabled drivers build config 00:02:19.009 compress/mlx5: not in enabled drivers build config 00:02:19.009 compress/octeontx: not in enabled drivers build config 00:02:19.009 compress/zlib: not in enabled drivers build config 00:02:19.009 regex/mlx5: not in enabled drivers build config 00:02:19.009 regex/cn9k: not in enabled drivers build config 00:02:19.009 vdpa/ifc: not in enabled drivers build config 00:02:19.009 vdpa/mlx5: not in enabled drivers build config 00:02:19.009 vdpa/sfc: not in enabled drivers build config 00:02:19.009 event/cnxk: not in enabled drivers build config 00:02:19.009 event/dlb2: not in enabled drivers build config 00:02:19.009 event/dpaa: not in enabled drivers build config 00:02:19.009 event/dpaa2: not in enabled drivers build config 00:02:19.009 event/dsw: not in enabled drivers build config 00:02:19.009 event/opdl: not in enabled drivers build config 00:02:19.009 event/skeleton: not in enabled drivers build config 00:02:19.009 event/sw: not in enabled drivers build config 00:02:19.009 event/octeontx: not in enabled drivers build config 00:02:19.009 baseband/acc: not in enabled drivers build config 00:02:19.009 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:19.009 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:19.009 baseband/la12xx: not in enabled drivers build config 00:02:19.009 baseband/null: not in enabled drivers build config 00:02:19.009 baseband/turbo_sw: not in enabled drivers build config 00:02:19.009 gpu/cuda: not in enabled drivers build config 00:02:19.009 00:02:19.009 00:02:19.009 Build targets in project: 309 00:02:19.009 00:02:19.009 DPDK 22.11.4 00:02:19.009 00:02:19.009 User defined options 00:02:19.009 libdir : lib 00:02:19.009 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:19.009 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:19.009 c_link_args : 00:02:19.009 enable_docs : false 00:02:19.009 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:19.009 enable_kmods : false 00:02:19.009 machine : native 00:02:19.009 tests : false 00:02:19.009 00:02:19.009 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:19.009 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
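The meson configuration above finishes with 309 build targets, and the ninja invocation in the next entry drives the compile. For reference, a hedged sketch of how the same external-DPDK configure/build could be reproduced outside Jenkins; $DPDK_SRC and $DPDK_PREFIX are illustrative placeholders (the job uses its workspace paths), and the final install step is an assumption not visible in this excerpt:

# Assumes a local checkout of dpdk-stable v22.11.4 in $DPDK_SRC.
DPDK_SRC=$HOME/dpdk
DPDK_PREFIX=$DPDK_SRC/build          # mirrors SPDK_RUN_EXTERNAL_DPDK in autorun-spdk.conf

cd "$DPDK_SRC"
# 'meson setup' avoids the "meson [options] ... is ambiguous and deprecated" warning seen above;
# the option values mirror the job's invocation.
meson setup build-tmp \
    --prefix="$DPDK_PREFIX" --libdir lib \
    -Denable_docs=false -Denable_kmods=false -Dtests=false \
    -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
    -Dmachine=native \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
ninja -C build-tmp -j"$(nproc)"      # the job pins -j144 for its build host
ninja -C build-tmp install           # assumed follow-up so SPDK's --with-dpdk=$DPDK_PREFIX can resolve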
00:02:19.279 10:18:24 build_native_dpdk -- common/autobuild_common.sh@189 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j144 00:02:19.279 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:19.279 [1/738] Generating lib/rte_kvargs_def with a custom command 00:02:19.279 [2/738] Generating lib/rte_kvargs_mingw with a custom command 00:02:19.279 [3/738] Generating lib/rte_telemetry_def with a custom command 00:02:19.279 [4/738] Generating lib/rte_telemetry_mingw with a custom command 00:02:19.279 [5/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:19.279 [6/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:19.279 [7/738] Generating lib/rte_eal_mingw with a custom command 00:02:19.279 [8/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:19.279 [9/738] Generating lib/rte_rcu_def with a custom command 00:02:19.279 [10/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:19.279 [11/738] Generating lib/rte_eal_def with a custom command 00:02:19.279 [12/738] Generating lib/rte_ring_mingw with a custom command 00:02:19.279 [13/738] Generating lib/rte_rcu_mingw with a custom command 00:02:19.279 [14/738] Generating lib/rte_mempool_def with a custom command 00:02:19.279 [15/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:19.541 [16/738] Generating lib/rte_net_def with a custom command 00:02:19.541 [17/738] Generating lib/rte_meter_mingw with a custom command 00:02:19.541 [18/738] Generating lib/rte_mempool_mingw with a custom command 00:02:19.541 [19/738] Generating lib/rte_mbuf_def with a custom command 00:02:19.541 [20/738] Generating lib/rte_mbuf_mingw with a custom command 00:02:19.541 [21/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:19.541 [22/738] Generating lib/rte_ring_def with a custom command 00:02:19.541 [23/738] Generating lib/rte_net_mingw with a custom command 00:02:19.541 [24/738] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:19.541 [25/738] Generating lib/rte_meter_def with a custom command 00:02:19.541 [26/738] Generating lib/rte_ethdev_mingw with a custom command 00:02:19.541 [27/738] Generating lib/rte_pci_def with a custom command 00:02:19.541 [28/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:19.541 [29/738] Generating lib/rte_ethdev_def with a custom command 00:02:19.541 [30/738] Generating lib/rte_pci_mingw with a custom command 00:02:19.541 [31/738] Generating lib/rte_cmdline_mingw with a custom command 00:02:19.541 [32/738] Generating lib/rte_cmdline_def with a custom command 00:02:19.541 [33/738] Linking static target lib/librte_kvargs.a 00:02:19.541 [34/738] Generating lib/rte_hash_def with a custom command 00:02:19.541 [35/738] Generating lib/rte_metrics_mingw with a custom command 00:02:19.541 [36/738] Generating lib/rte_hash_mingw with a custom command 00:02:19.541 [37/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:19.541 [38/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:19.541 [39/738] Generating lib/rte_metrics_def with a custom command 00:02:19.541 [40/738] Generating lib/rte_timer_mingw with a custom command 00:02:19.541 [41/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:19.541 [42/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 
00:02:19.541 [43/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:19.541 [44/738] Generating lib/rte_timer_def with a custom command 00:02:19.541 [45/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:19.541 [46/738] Generating lib/rte_bitratestats_def with a custom command 00:02:19.541 [47/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:19.541 [48/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:19.541 [49/738] Generating lib/rte_acl_mingw with a custom command 00:02:19.541 [50/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:19.541 [51/738] Generating lib/rte_bbdev_def with a custom command 00:02:19.541 [52/738] Generating lib/rte_acl_def with a custom command 00:02:19.541 [53/738] Generating lib/rte_bbdev_mingw with a custom command 00:02:19.541 [54/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:19.541 [55/738] Generating lib/rte_bitratestats_mingw with a custom command 00:02:19.541 [56/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:19.541 [57/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:19.541 [58/738] Generating lib/rte_bpf_mingw with a custom command 00:02:19.541 [59/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:19.541 [60/738] Generating lib/rte_cfgfile_def with a custom command 00:02:19.541 [61/738] Generating lib/rte_cfgfile_mingw with a custom command 00:02:19.541 [62/738] Generating lib/rte_bpf_def with a custom command 00:02:19.541 [63/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:19.541 [64/738] Generating lib/rte_compressdev_mingw with a custom command 00:02:19.541 [65/738] Generating lib/rte_compressdev_def with a custom command 00:02:19.541 [66/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:19.541 [67/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:19.541 [68/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:19.541 [69/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:19.541 [70/738] Generating lib/rte_cryptodev_def with a custom command 00:02:19.541 [71/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:19.541 [72/738] Generating lib/rte_cryptodev_mingw with a custom command 00:02:19.541 [73/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:19.541 [74/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:19.541 [75/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:19.541 [76/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:19.541 [77/738] Generating lib/rte_distributor_mingw with a custom command 00:02:19.541 [78/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:19.541 [79/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:19.541 [80/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:19.541 [81/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:19.541 [82/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:19.541 [83/738] Generating lib/rte_efd_def with a custom command 00:02:19.541 [84/738] Generating lib/rte_efd_mingw with a custom command 00:02:19.541 [85/738] Compiling C object 
lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:19.541 [86/738] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:19.541 [87/738] Generating lib/rte_distributor_def with a custom command 00:02:19.541 [88/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:19.541 [89/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:19.541 [90/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:19.541 [91/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:19.541 [92/738] Linking static target lib/librte_pci.a 00:02:19.541 [93/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:19.803 [94/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:19.803 [95/738] Generating lib/rte_eventdev_mingw with a custom command 00:02:19.803 [96/738] Generating lib/rte_gpudev_def with a custom command 00:02:19.803 [97/738] Generating lib/rte_gpudev_mingw with a custom command 00:02:19.803 [98/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:19.803 [99/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:19.803 [100/738] Generating lib/rte_eventdev_def with a custom command 00:02:19.803 [101/738] Generating lib/rte_gro_def with a custom command 00:02:19.803 [102/738] Generating lib/rte_gro_mingw with a custom command 00:02:19.803 [103/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:19.803 [104/738] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:19.803 [105/738] Generating lib/rte_gso_def with a custom command 00:02:19.803 [106/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:19.803 [107/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:19.803 [108/738] Generating lib/rte_gso_mingw with a custom command 00:02:19.803 [109/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:19.803 [110/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:19.803 [111/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:19.803 [112/738] Linking static target lib/librte_meter.a 00:02:19.803 [113/738] Generating lib/rte_ip_frag_mingw with a custom command 00:02:19.803 [114/738] Generating lib/rte_ip_frag_def with a custom command 00:02:19.803 [115/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:19.803 [116/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:19.803 [117/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:19.803 [118/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:19.803 [119/738] Generating lib/rte_jobstats_mingw with a custom command 00:02:19.803 [120/738] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:19.803 [121/738] Generating lib/rte_jobstats_def with a custom command 00:02:19.803 [122/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:19.803 [123/738] Generating lib/rte_latencystats_def with a custom command 00:02:19.803 [124/738] Generating lib/rte_latencystats_mingw with a custom command 00:02:19.803 [125/738] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:19.803 [126/738] Generating lib/rte_lpm_def with a custom command 00:02:19.803 [127/738] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:19.803 [128/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:19.803 [129/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:19.803 [130/738] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:19.803 [131/738] Generating lib/rte_lpm_mingw with a custom command 00:02:19.803 [132/738] Generating lib/rte_member_def with a custom command 00:02:19.803 [133/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:19.803 [134/738] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:19.803 [135/738] Generating lib/rte_member_mingw with a custom command 00:02:19.803 [136/738] Linking static target lib/librte_ring.a 00:02:19.803 [137/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:19.803 [138/738] Generating lib/rte_pcapng_def with a custom command 00:02:19.803 [139/738] Generating lib/rte_pcapng_mingw with a custom command 00:02:19.803 [140/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:19.803 [141/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:19.803 [142/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:19.803 [143/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:19.803 [144/738] Generating lib/rte_power_def with a custom command 00:02:19.803 [145/738] Generating lib/rte_power_mingw with a custom command 00:02:19.803 [146/738] Generating lib/rte_rawdev_def with a custom command 00:02:19.803 [147/738] Generating lib/rte_rawdev_mingw with a custom command 00:02:19.803 [148/738] Generating lib/rte_regexdev_def with a custom command 00:02:19.803 [149/738] Generating lib/rte_regexdev_mingw with a custom command 00:02:19.803 [150/738] Generating lib/rte_dmadev_mingw with a custom command 00:02:19.803 [151/738] Generating lib/rte_dmadev_def with a custom command 00:02:19.803 [152/738] Generating lib/rte_rib_def with a custom command 00:02:20.067 [153/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:20.067 [154/738] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:20.067 [155/738] Generating lib/rte_rib_mingw with a custom command 00:02:20.067 [156/738] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:20.067 [157/738] Generating lib/rte_reorder_def with a custom command 00:02:20.067 [158/738] Generating lib/rte_reorder_mingw with a custom command 00:02:20.067 [159/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:20.067 [160/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:20.067 [161/738] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:20.067 [162/738] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:20.067 [163/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:20.067 [164/738] Linking static target lib/librte_jobstats.a 00:02:20.067 [165/738] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:20.067 [166/738] Generating lib/rte_security_def with a custom command 00:02:20.067 [167/738] Generating lib/rte_sched_def with a custom command 00:02:20.067 [168/738] Generating lib/rte_sched_mingw with a custom command 00:02:20.067 [169/738] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.067 [170/738] Generating 
lib/rte_security_mingw with a custom command 00:02:20.067 [171/738] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.067 [172/738] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:20.067 [173/738] Generating lib/rte_stack_def with a custom command 00:02:20.067 [174/738] Linking static target lib/librte_cfgfile.a 00:02:20.067 [175/738] Generating lib/rte_stack_mingw with a custom command 00:02:20.067 [176/738] Linking target lib/librte_kvargs.so.23.0 00:02:20.067 [177/738] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:20.067 [178/738] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.067 [179/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:20.067 [180/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:20.067 [181/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:20.067 [182/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:20.067 [183/738] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:20.067 [184/738] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:20.067 [185/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:20.067 [186/738] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:20.067 [187/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:20.067 [188/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:20.067 [189/738] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:20.067 [190/738] Generating lib/rte_vhost_def with a custom command 00:02:20.067 [191/738] Generating lib/rte_vhost_mingw with a custom command 00:02:20.067 [192/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:20.067 [193/738] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:20.067 [194/738] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:20.067 [195/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:20.067 [196/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:20.067 [197/738] Generating lib/rte_ipsec_mingw with a custom command 00:02:20.067 [198/738] Generating lib/rte_ipsec_def with a custom command 00:02:20.067 [199/738] Linking static target lib/librte_stack.a 00:02:20.067 [200/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:20.067 [201/738] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:20.067 [202/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:20.329 [203/738] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.329 [204/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:20.329 [205/738] Generating lib/rte_fib_mingw with a custom command 00:02:20.329 [206/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:20.329 [207/738] Generating lib/rte_fib_def with a custom command 00:02:20.329 [208/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:20.329 [209/738] Linking static target lib/librte_cmdline.a 00:02:20.329 [210/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:20.329 [211/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:20.329 [212/738] Compiling C object 
lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:20.329 [213/738] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:20.329 [214/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:20.329 [215/738] Linking static target lib/librte_timer.a 00:02:20.329 [216/738] Linking static target lib/librte_telemetry.a 00:02:20.329 [217/738] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:20.329 [218/738] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:20.329 [219/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:20.329 [220/738] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:20.329 [221/738] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:20.329 [222/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:20.329 [223/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:20.329 [224/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:20.329 [225/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:20.329 [226/738] Linking static target lib/librte_metrics.a 00:02:20.329 [227/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:20.329 [228/738] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:20.329 [229/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:20.329 [230/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:20.329 [231/738] Generating lib/rte_port_def with a custom command 00:02:20.329 [232/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:20.329 [233/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:20.329 [234/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:20.329 [235/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:20.329 [236/738] Generating lib/rte_port_mingw with a custom command 00:02:20.329 [237/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:20.329 [238/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:20.329 [239/738] Generating lib/rte_pdump_mingw with a custom command 00:02:20.329 [240/738] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:20.329 [241/738] Generating lib/rte_pdump_def with a custom command 00:02:20.329 [242/738] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:20.329 [243/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:20.329 [244/738] Linking static target lib/librte_bitratestats.a 00:02:20.329 [245/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:20.329 [246/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:20.329 [247/738] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:20.329 [248/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:20.329 [249/738] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:20.329 [250/738] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:20.329 [251/738] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:20.329 [252/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:20.329 [253/738] Compiling C object 
lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:20.329 [254/738] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:20.329 [255/738] Generating lib/rte_table_def with a custom command 00:02:20.329 [256/738] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:20.329 [257/738] Generating lib/rte_table_mingw with a custom command 00:02:20.329 [258/738] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:20.329 [259/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:20.329 [260/738] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.329 [261/738] Linking static target lib/librte_net.a 00:02:20.329 [262/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:20.329 [263/738] Linking static target lib/librte_rawdev.a 00:02:20.329 [264/738] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:20.329 [265/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:20.329 [266/738] Generating lib/rte_pipeline_mingw with a custom command 00:02:20.588 [267/738] Generating lib/rte_pipeline_def with a custom command 00:02:20.588 [268/738] Generating lib/rte_graph_def with a custom command 00:02:20.588 [269/738] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:20.588 [270/738] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:20.588 [271/738] Generating lib/rte_graph_mingw with a custom command 00:02:20.588 [272/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:20.588 [273/738] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.588 [274/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:20.588 [275/738] Linking static target lib/librte_dmadev.a 00:02:20.588 [276/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:20.588 [277/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:20.588 [278/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:20.588 [279/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:20.588 [280/738] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:20.588 [281/738] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:20.588 [282/738] Generating lib/rte_node_def with a custom command 00:02:20.588 [283/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:20.588 [284/738] Generating lib/rte_node_mingw with a custom command 00:02:20.588 [285/738] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.588 [286/738] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:20.588 [287/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:20.588 [288/738] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:20.588 [289/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:20.588 [290/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:20.588 [291/738] Linking static target lib/librte_compressdev.a 00:02:20.588 [292/738] Linking static target lib/librte_gpudev.a 00:02:20.588 [293/738] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:20.588 [294/738] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:20.589 [295/738] Compiling C object 
lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:20.589 [296/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:20.589 [297/738] Generating drivers/rte_bus_pci_def with a custom command 00:02:20.589 [298/738] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:20.589 [299/738] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:20.589 [300/738] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:20.589 [301/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:20.589 [302/738] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:20.589 [303/738] Generating drivers/rte_bus_vdev_def with a custom command 00:02:20.589 [304/738] Linking static target lib/librte_rcu.a 00:02:20.589 [305/738] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:20.589 [306/738] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:20.589 [307/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:20.589 [308/738] Generating drivers/rte_mempool_ring_def with a custom command 00:02:20.589 [309/738] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:02:20.589 [310/738] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:20.589 [311/738] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:20.589 [312/738] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:20.589 [313/738] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:20.589 [314/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:20.589 [315/738] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:20.589 [316/738] Linking static target lib/librte_reorder.a 00:02:20.589 [317/738] Linking static target lib/librte_latencystats.a 00:02:20.589 [318/738] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:20.589 [319/738] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.589 [320/738] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:20.589 [321/738] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:20.589 [322/738] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:20.589 [323/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:20.849 [324/738] Linking static target lib/librte_power.a 00:02:20.849 [325/738] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:20.849 [326/738] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:20.849 [327/738] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:20.849 [328/738] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:20.849 [329/738] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:20.849 [330/738] Linking static target lib/librte_regexdev.a 00:02:20.849 [331/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:20.849 [332/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:20.849 [333/738] Linking static target lib/librte_gro.a 00:02:20.849 [334/738] Generating drivers/rte_net_i40e_def with a custom command 00:02:20.849 [335/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:20.849 [336/738] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:20.849 [337/738] Generating 
drivers/rte_net_i40e_mingw with a custom command 00:02:20.849 [338/738] Linking static target lib/librte_bbdev.a 00:02:20.849 [339/738] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.849 [340/738] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.849 [341/738] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.849 [342/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:20.849 [343/738] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:20.849 [344/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:20.849 [345/738] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:20.849 [346/738] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:20.849 [347/738] Linking target lib/librte_telemetry.so.23.0 00:02:20.849 [348/738] Linking static target lib/librte_mempool.a 00:02:20.849 [349/738] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:20.849 [350/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:20.849 [351/738] Linking static target lib/librte_gso.a 00:02:20.849 [352/738] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.849 [353/738] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:20.849 [354/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:20.849 [355/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:20.849 [356/738] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:20.849 [357/738] Linking static target lib/librte_distributor.a 00:02:20.849 [358/738] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:20.849 [359/738] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:20.849 [360/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:20.849 [361/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:20.849 [362/738] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:20.849 [363/738] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:20.849 [364/738] Linking static target lib/librte_ip_frag.a 00:02:20.849 [365/738] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:20.849 [366/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:20.849 [367/738] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.849 [368/738] Linking static target lib/librte_security.a 00:02:21.116 [369/738] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:21.116 [370/738] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:21.116 [371/738] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:21.116 [372/738] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:21.116 [373/738] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:21.116 [374/738] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.116 [375/738] Linking static target lib/librte_pcapng.a 00:02:21.116 [376/738] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:21.116 [377/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:21.116 [378/738] 
Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:21.116 [379/738] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.116 [380/738] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:21.116 [381/738] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:21.116 [382/738] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.116 [383/738] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:21.116 [384/738] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:21.116 [385/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:21.116 [386/738] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:21.116 [387/738] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.116 [388/738] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.116 [389/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:21.116 [390/738] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:21.116 [391/738] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:21.116 [392/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:21.116 [393/738] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:21.116 [394/738] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:21.116 [395/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:21.116 [396/738] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:21.116 [397/738] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:21.116 [398/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:21.116 [399/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:21.116 [400/738] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:21.116 [401/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:21.116 [402/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:21.116 [403/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:21.116 [404/738] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:21.116 [405/738] Linking static target lib/librte_eal.a 00:02:21.116 [406/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:21.116 [407/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:21.116 [408/738] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:21.116 [409/738] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:21.116 [410/738] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:21.116 [411/738] Linking static target lib/librte_graph.a 00:02:21.116 [412/738] Linking static target lib/librte_rib.a 00:02:21.379 [413/738] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:21.380 [414/738] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:21.380 [415/738] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:21.380 [416/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:21.380 [417/738] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.380 [418/738] Compiling 
C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:21.380 [419/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:21.380 [420/738] Linking static target drivers/librte_bus_vdev.a 00:02:21.380 [421/738] Linking static target lib/librte_bpf.a 00:02:21.380 [422/738] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:21.380 [423/738] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:21.380 [424/738] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:21.380 [425/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:21.380 [426/738] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:21.380 [427/738] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.380 [428/738] Linking static target lib/librte_lpm.a 00:02:21.380 [429/738] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:21.380 [430/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:21.380 [431/738] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:21.380 [432/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:21.380 [433/738] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:21.380 [434/738] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.380 [435/738] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:21.380 [436/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:21.380 [437/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:21.380 [438/738] Linking static target lib/librte_mbuf.a 00:02:21.380 [439/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:21.380 [440/738] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:21.380 [441/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:21.380 [442/738] Linking static target lib/librte_fib.a 00:02:21.380 [443/738] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.380 [444/738] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:21.639 [445/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:21.639 [446/738] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:21.639 [447/738] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:21.639 [448/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:21.639 [449/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:21.639 [450/738] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:21.639 [451/738] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:21.639 [452/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:21.639 [453/738] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:21.639 [454/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:21.639 [455/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:21.639 [456/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:21.639 [457/738] Linking static target drivers/librte_bus_pci.a 
00:02:21.639 [458/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:21.639 [459/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:21.639 [460/738] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.639 [461/738] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.639 [462/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:21.639 [463/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:21.639 [464/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:21.639 [465/738] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:21.639 [466/738] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.639 [467/738] Linking static target lib/librte_efd.a 00:02:21.639 [468/738] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.639 [469/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:21.639 [470/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:21.639 [471/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:21.639 [472/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:21.639 [473/738] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.639 [474/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:21.639 [475/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:21.639 [476/738] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.639 [477/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:21.639 [478/738] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.639 [479/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:21.901 [480/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:21.901 [481/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:21.901 [482/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:21.901 [483/738] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:21.901 [484/738] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.901 [485/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:21.901 [486/738] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.901 [487/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:21.901 [488/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:21.901 [489/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:21.901 [490/738] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:21.901 [491/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:21.901 [492/738] Linking static target lib/librte_pdump.a 00:02:21.901 [493/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:21.901 [494/738] Compiling C object 
app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:21.901 [495/738] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.901 [496/738] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:21.901 [497/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:21.901 [498/738] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:21.901 [499/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:21.901 [500/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:21.901 [501/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:21.901 [502/738] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.901 [503/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:21.901 [504/738] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:21.901 [505/738] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.901 [506/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:21.901 [507/738] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.901 [508/738] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:21.901 [509/738] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:21.901 [510/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:21.901 [511/738] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:21.901 [512/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:21.901 [513/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:21.901 [514/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:21.901 [515/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:21.901 [516/738] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.901 [517/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:21.901 [518/738] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.901 [519/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:21.901 [520/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:21.901 [521/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:21.901 [522/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:21.901 [523/738] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.901 [524/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:21.901 [525/738] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:21.901 [526/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:21.901 [527/738] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:22.161 [528/738] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:22.161 [529/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:22.161 
[530/738] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:22.161 [531/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:22.161 [532/738] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:22.161 [533/738] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:22.161 [534/738] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:22.161 [535/738] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:22.162 [536/738] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:22.162 [537/738] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:22.162 [538/738] Linking static target drivers/librte_mempool_ring.a 00:02:22.162 [539/738] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.162 [540/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:22.162 [541/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:22.162 [542/738] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:22.162 [543/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:22.162 [544/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:22.162 [545/738] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:22.162 [546/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:22.162 [547/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:22.162 [548/738] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:22.162 [549/738] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:22.162 [550/738] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:22.162 [551/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:22.162 [552/738] Linking static target lib/librte_node.a 00:02:22.162 [553/738] Linking static target lib/librte_sched.a 00:02:22.162 [554/738] Linking static target lib/librte_member.a 00:02:22.162 [555/738] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:22.162 [556/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:22.162 [557/738] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:22.162 [558/738] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:22.162 [559/738] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:22.162 [560/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:22.162 [561/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:22.162 [562/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:22.162 [563/738] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:22.162 [564/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:22.162 [565/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:22.162 [566/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:22.162 [567/738] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.162 [568/738] Linking static target lib/librte_table.a 00:02:22.162 [569/738] Compiling C 
object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:22.162 [570/738] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:22.162 [571/738] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:22.162 [572/738] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:22.162 [573/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:22.422 [574/738] Linking static target lib/librte_ipsec.a 00:02:22.422 [575/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:22.422 [576/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:22.422 [577/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:22.422 [578/738] Linking static target lib/librte_cryptodev.a 00:02:22.422 [579/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:22.422 [580/738] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:22.422 [581/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:22.422 [582/738] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:22.422 [583/738] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:22.422 [584/738] Linking static target lib/librte_port.a 00:02:22.422 [585/738] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.422 [586/738] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:22.422 [587/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:22.422 [588/738] Linking static target lib/librte_ethdev.a 00:02:22.681 [589/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:22.681 [590/738] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:22.681 [591/738] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:22.681 [592/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:22.681 [593/738] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.681 [594/738] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:22.681 [595/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:22.681 [596/738] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:22.681 [597/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:22.681 [598/738] Linking static target lib/librte_eventdev.a 00:02:22.681 [599/738] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:22.681 [600/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:02:22.681 [601/738] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:22.681 [602/738] Linking static target lib/librte_hash.a 00:02:22.681 [603/738] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:22.681 [604/738] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.681 [605/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:22.681 [606/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:22.681 [607/738] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.940 [608/738] Compiling C object 
lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:22.940 [609/738] Linking static target lib/librte_acl.a 00:02:22.940 [610/738] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:23.199 [611/738] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:23.199 [612/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:23.199 [613/738] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.199 [614/738] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.457 [615/738] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.457 [616/738] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:23.715 [617/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:23.715 [618/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:23.715 [619/738] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.650 [620/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:24.650 [621/738] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:24.650 [622/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:24.650 [623/738] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:24.650 [624/738] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:24.650 [625/738] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:24.650 [626/738] Linking static target drivers/librte_net_i40e.a 00:02:25.220 [627/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:25.480 [628/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:25.741 [629/738] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.741 [630/738] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.741 [631/738] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.944 [632/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:29.944 [633/738] Linking static target lib/librte_pipeline.a 00:02:30.205 [634/738] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:30.205 [635/738] Linking static target lib/librte_vhost.a 00:02:30.507 [636/738] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.507 [637/738] Linking target app/dpdk-test-pipeline 00:02:30.507 [638/738] Linking target app/dpdk-test-flow-perf 00:02:30.507 [639/738] Linking target app/dpdk-test-acl 00:02:30.768 [640/738] Linking target app/dpdk-test-sad 00:02:30.768 [641/738] Linking target app/dpdk-test-regex 00:02:30.768 [642/738] Linking target app/dpdk-test-compress-perf 00:02:30.768 [643/738] Linking target app/dpdk-test-security-perf 00:02:30.768 [644/738] Linking target app/dpdk-testpmd 00:02:30.768 [645/738] Linking target app/dpdk-test-gpudev 00:02:30.768 [646/738] Linking target app/dpdk-dumpcap 00:02:30.768 [647/738] Linking target app/dpdk-pdump 00:02:30.768 [648/738] Linking target app/dpdk-test-cmdline 00:02:30.768 [649/738] Linking target app/dpdk-proc-info 00:02:30.768 [650/738] Linking target app/dpdk-test-fib 00:02:30.768 [651/738] Linking target app/dpdk-test-bbdev 00:02:30.768 
[652/738] Linking target app/dpdk-test-crypto-perf 00:02:30.768 [653/738] Linking target app/dpdk-test-eventdev 00:02:32.680 [654/738] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.621 [655/738] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.621 [656/738] Linking target lib/librte_eal.so.23.0 00:02:33.920 [657/738] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:33.920 [658/738] Linking target lib/librte_pci.so.23.0 00:02:33.920 [659/738] Linking target lib/librte_ring.so.23.0 00:02:33.920 [660/738] Linking target lib/librte_cfgfile.so.23.0 00:02:33.920 [661/738] Linking target lib/librte_timer.so.23.0 00:02:33.920 [662/738] Linking target lib/librte_meter.so.23.0 00:02:33.920 [663/738] Linking target lib/librte_jobstats.so.23.0 00:02:33.920 [664/738] Linking target lib/librte_dmadev.so.23.0 00:02:33.920 [665/738] Linking target lib/librte_rawdev.so.23.0 00:02:33.920 [666/738] Linking target lib/librte_stack.so.23.0 00:02:33.920 [667/738] Linking target lib/librte_graph.so.23.0 00:02:33.920 [668/738] Linking target drivers/librte_bus_vdev.so.23.0 00:02:33.920 [669/738] Linking target lib/librte_acl.so.23.0 00:02:34.233 [670/738] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:34.233 [671/738] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:34.233 [672/738] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:34.233 [673/738] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:34.233 [674/738] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:34.233 [675/738] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:34.233 [676/738] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:34.233 [677/738] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:34.233 [678/738] Linking target drivers/librte_bus_pci.so.23.0 00:02:34.233 [679/738] Linking target lib/librte_rcu.so.23.0 00:02:34.233 [680/738] Linking target lib/librte_mempool.so.23.0 00:02:34.233 [681/738] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:34.233 [682/738] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:34.233 [683/738] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:34.233 [684/738] Linking target drivers/librte_mempool_ring.so.23.0 00:02:34.233 [685/738] Linking target lib/librte_rib.so.23.0 00:02:34.233 [686/738] Linking target lib/librte_mbuf.so.23.0 00:02:34.494 [687/738] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:34.494 [688/738] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:34.494 [689/738] Linking target lib/librte_gpudev.so.23.0 00:02:34.494 [690/738] Linking target lib/librte_compressdev.so.23.0 00:02:34.494 [691/738] Linking target lib/librte_net.so.23.0 00:02:34.494 [692/738] Linking target lib/librte_cryptodev.so.23.0 00:02:34.494 [693/738] Linking target lib/librte_bbdev.so.23.0 00:02:34.494 [694/738] Linking target lib/librte_regexdev.so.23.0 00:02:34.494 [695/738] Linking target lib/librte_reorder.so.23.0 00:02:34.494 [696/738] Linking target lib/librte_distributor.so.23.0 00:02:34.494 [697/738] Linking target 
lib/librte_sched.so.23.0 00:02:34.494 [698/738] Linking target lib/librte_fib.so.23.0 00:02:34.754 [699/738] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:34.754 [700/738] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:34.754 [701/738] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:34.754 [702/738] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.754 [703/738] Linking target lib/librte_hash.so.23.0 00:02:34.754 [704/738] Linking target lib/librte_cmdline.so.23.0 00:02:34.754 [705/738] Linking target lib/librte_security.so.23.0 00:02:34.754 [706/738] Linking target lib/librte_ethdev.so.23.0 00:02:34.754 [707/738] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:34.754 [708/738] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:34.754 [709/738] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:35.016 [710/738] Linking target lib/librte_member.so.23.0 00:02:35.016 [711/738] Linking target lib/librte_lpm.so.23.0 00:02:35.016 [712/738] Linking target lib/librte_efd.so.23.0 00:02:35.016 [713/738] Linking target lib/librte_metrics.so.23.0 00:02:35.016 [714/738] Linking target lib/librte_bpf.so.23.0 00:02:35.016 [715/738] Linking target lib/librte_pcapng.so.23.0 00:02:35.016 [716/738] Linking target lib/librte_gso.so.23.0 00:02:35.016 [717/738] Linking target lib/librte_gro.so.23.0 00:02:35.016 [718/738] Linking target lib/librte_ip_frag.so.23.0 00:02:35.016 [719/738] Linking target lib/librte_ipsec.so.23.0 00:02:35.016 [720/738] Linking target lib/librte_power.so.23.0 00:02:35.016 [721/738] Linking target lib/librte_eventdev.so.23.0 00:02:35.016 [722/738] Linking target lib/librte_vhost.so.23.0 00:02:35.016 [723/738] Linking target drivers/librte_net_i40e.so.23.0 00:02:35.016 [724/738] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:35.016 [725/738] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:35.016 [726/738] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:35.016 [727/738] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:35.016 [728/738] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:35.016 [729/738] Linking target lib/librte_node.so.23.0 00:02:35.016 [730/738] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:35.016 [731/738] Linking target lib/librte_bitratestats.so.23.0 00:02:35.016 [732/738] Linking target lib/librte_latencystats.so.23.0 00:02:35.016 [733/738] Linking target lib/librte_port.so.23.0 00:02:35.016 [734/738] Linking target lib/librte_pdump.so.23.0 00:02:35.277 [735/738] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:35.277 [736/738] Linking target lib/librte_table.so.23.0 00:02:35.538 [737/738] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:35.538 [738/738] Linking target lib/librte_pipeline.so.23.0 00:02:35.538 10:18:41 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:02:35.538 10:18:41 build_native_dpdk -- common/autobuild_common.sh@191 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:35.538 10:18:41 build_native_dpdk -- common/autobuild_common.sh@204 -- $ ninja -C 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j144 install 00:02:35.538 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:35.538 [0/1] Installing files. 00:02:35.805 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:35.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:35.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:35.805 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:35.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:35.806 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:35.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:35.807 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 
00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:35.807 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:35.807 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:35.807 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.808 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:35.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:35.808 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.809 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 
00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:35.809 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:35.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.810 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:35.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:35.810 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.810 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.810 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.810 Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.810 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.810 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.810 Installing lib/librte_ring.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.810 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.810 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.810 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.810 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.810 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.810 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.810 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.810 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.810 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.810 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.810 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.810 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.810 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.810 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.810 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.810 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.810 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.810 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.810 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.810 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.810 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 
00:02:35.811 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 
00:02:35.811 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.811 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:36.075 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:36.075 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:36.075 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:36.075 Installing lib/librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:36.075 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:36.075 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:36.075 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:36.075 Installing lib/librte_node.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:36.075 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:36.075 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:36.075 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:36.075 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:36.075 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:36.075 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:36.075 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:36.075 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:36.075 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:36.076 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:36.076 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:36.076 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:36.076 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:36.076 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:36.076 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:36.076 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:36.076 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:36.076 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:36.076 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:36.076 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:36.076 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:36.076 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:36.076 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:36.076 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:36.076 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.076 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.077 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.078 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 
00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:36.079 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:36.080 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:36.080 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.080 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:36.080 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:36.080 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:02:36.080 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:36.080 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:02:36.080 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:36.080 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.23 00:02:36.080 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:36.080 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:02:36.080 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:36.080 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:02:36.080 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:36.080 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:02:36.080 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:36.080 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:02:36.080 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:36.080 Installing symlink pointing to librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.23 00:02:36.080 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:36.080 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:02:36.080 Installing symlink pointing to librte_meter.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:36.080 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:02:36.080 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:36.080 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:02:36.080 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:36.080 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:02:36.080 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:36.080 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:02:36.080 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:36.080 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:02:36.080 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:36.080 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:02:36.080 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:36.080 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:02:36.080 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:36.080 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:02:36.080 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:36.080 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:02:36.080 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:36.080 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:02:36.080 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:36.080 Installing symlink pointing to librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:02:36.080 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:36.080 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:02:36.080 Installing symlink pointing to librte_compressdev.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:36.080 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:02:36.080 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:36.080 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:02:36.080 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:36.080 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:02:36.080 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:36.080 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:02:36.080 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:36.080 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:02:36.080 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:36.080 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:02:36.080 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:36.080 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:02:36.080 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:36.080 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:02:36.080 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:36.080 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:02:36.080 Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:36.080 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:02:36.080 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:36.080 Installing symlink pointing to librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:02:36.080 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:36.080 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.23 00:02:36.080 Installing symlink pointing to librte_member.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:36.081 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:02:36.081 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:36.081 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.23 00:02:36.081 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:36.081 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:02:36.081 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:36.081 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:02:36.081 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:36.081 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:02:36.081 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:36.081 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:02:36.081 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:36.081 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:02:36.081 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:36.081 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:02:36.081 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:36.081 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.23 00:02:36.081 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:36.081 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:02:36.081 Installing symlink pointing to librte_stack.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:36.081 Installing symlink pointing to librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:02:36.081 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:36.081 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:02:36.081 Installing symlink pointing to librte_ipsec.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:36.081 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:02:36.081 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:36.081 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.23 00:02:36.081 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:36.081 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:02:36.081 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:36.081 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.23 00:02:36.081 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:36.081 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:02:36.081 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:36.081 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:02:36.081 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:36.081 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.23 00:02:36.081 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:36.081 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:36.081 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:36.081 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:36.081 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:36.081 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:36.081 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:36.081 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:36.081 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:36.081 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:36.081 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:36.081 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:36.081 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:36.081 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:36.081 './librte_net_i40e.so' -> 
'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:36.081 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:36.081 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:36.081 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:02:36.081 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:02:36.081 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:02:36.081 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:02:36.081 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:02:36.081 10:18:41 build_native_dpdk -- common/autobuild_common.sh@210 -- $ cat 00:02:36.081 10:18:41 build_native_dpdk -- common/autobuild_common.sh@215 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:36.081 00:02:36.081 real 0m23.217s 00:02:36.081 user 5m50.438s 00:02:36.081 sys 2m23.042s 00:02:36.081 10:18:41 build_native_dpdk -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:36.081 10:18:41 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:36.081 ************************************ 00:02:36.081 END TEST build_native_dpdk 00:02:36.081 ************************************ 00:02:36.343 10:18:41 -- common/autotest_common.sh@1142 -- $ return 0 00:02:36.343 10:18:41 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:36.343 10:18:41 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:36.343 10:18:41 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:36.343 10:18:41 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:36.343 10:18:41 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:36.343 10:18:41 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:36.343 10:18:41 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:36.343 10:18:41 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:36.343 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:36.603 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:36.603 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.603 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:37.173 Using 'verbs' RDMA provider 00:02:52.634 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:04.861 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:04.861 Creating mk/config.mk...done. 00:03:04.861 Creating mk/cc.flags.mk...done. 00:03:04.861 Type 'make' to build. 
00:03:04.861 10:19:10 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:03:04.861 10:19:10 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:03:04.861 10:19:10 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:04.861 10:19:10 -- common/autotest_common.sh@10 -- $ set +x 00:03:04.861 ************************************ 00:03:04.861 START TEST make 00:03:04.861 ************************************ 00:03:04.861 10:19:10 make -- common/autotest_common.sh@1123 -- $ make -j144 00:03:04.861 make[1]: Nothing to be done for 'all'. 00:03:06.240 The Meson build system 00:03:06.240 Version: 1.3.1 00:03:06.240 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:06.240 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:06.240 Build type: native build 00:03:06.240 Project name: libvfio-user 00:03:06.240 Project version: 0.0.1 00:03:06.240 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:06.240 C linker for the host machine: gcc ld.bfd 2.39-16 00:03:06.240 Host machine cpu family: x86_64 00:03:06.240 Host machine cpu: x86_64 00:03:06.240 Run-time dependency threads found: YES 00:03:06.240 Library dl found: YES 00:03:06.240 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:06.240 Run-time dependency json-c found: YES 0.17 00:03:06.240 Run-time dependency cmocka found: YES 1.1.7 00:03:06.240 Program pytest-3 found: NO 00:03:06.240 Program flake8 found: NO 00:03:06.240 Program misspell-fixer found: NO 00:03:06.240 Program restructuredtext-lint found: NO 00:03:06.240 Program valgrind found: YES (/usr/bin/valgrind) 00:03:06.240 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:06.240 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:06.240 Compiler for C supports arguments -Wwrite-strings: YES 00:03:06.240 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:06.240 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:06.240 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:06.240 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:03:06.240 Build targets in project: 8 00:03:06.240 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:06.240 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:06.240 00:03:06.240 libvfio-user 0.0.1 00:03:06.240 00:03:06.240 User defined options 00:03:06.240 buildtype : debug 00:03:06.240 default_library: shared 00:03:06.240 libdir : /usr/local/lib 00:03:06.240 00:03:06.240 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:06.497 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:06.497 [1/37] Compiling C object samples/null.p/null.c.o 00:03:06.497 [2/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:06.498 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:06.498 [4/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:06.498 [5/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:06.498 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:06.498 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:06.498 [8/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:06.498 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:06.498 [10/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:06.498 [11/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:06.498 [12/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:06.498 [13/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:06.498 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:06.498 [15/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:06.498 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:06.498 [17/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:06.498 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:06.498 [19/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:06.498 [20/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:06.498 [21/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:06.498 [22/37] Compiling C object samples/server.p/server.c.o 00:03:06.498 [23/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:06.498 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:06.498 [25/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:06.755 [26/37] Compiling C object samples/client.p/client.c.o 00:03:06.755 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:06.755 [28/37] Linking target samples/client 00:03:06.755 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:06.755 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:03:06.755 [31/37] Linking target test/unit_tests 00:03:06.755 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:07.014 [33/37] Linking target samples/server 00:03:07.014 [34/37] Linking target samples/lspci 00:03:07.014 [35/37] Linking target samples/null 00:03:07.014 [36/37] Linking target samples/gpio-pci-idio-16 00:03:07.014 [37/37] Linking target samples/shadow_ioeventfd_server 00:03:07.014 INFO: autodetecting backend as ninja 00:03:07.014 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:03:07.014 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:07.273 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:07.273 ninja: no work to do. 00:03:15.406 CC lib/log/log.o 00:03:15.406 CC lib/log/log_flags.o 00:03:15.406 CC lib/log/log_deprecated.o 00:03:15.406 CC lib/ut_mock/mock.o 00:03:15.406 CC lib/ut/ut.o 00:03:15.406 LIB libspdk_log.a 00:03:15.406 LIB libspdk_ut_mock.a 00:03:15.406 LIB libspdk_ut.a 00:03:15.406 SO libspdk_log.so.7.0 00:03:15.406 SO libspdk_ut.so.2.0 00:03:15.406 SO libspdk_ut_mock.so.6.0 00:03:15.406 SYMLINK libspdk_ut.so 00:03:15.406 SYMLINK libspdk_ut_mock.so 00:03:15.406 SYMLINK libspdk_log.so 00:03:15.666 CC lib/ioat/ioat.o 00:03:15.666 CC lib/dma/dma.o 00:03:15.666 CC lib/util/base64.o 00:03:15.666 CC lib/util/bit_array.o 00:03:15.666 CXX lib/trace_parser/trace.o 00:03:15.666 CC lib/util/cpuset.o 00:03:15.666 CC lib/util/crc16.o 00:03:15.666 CC lib/util/crc32.o 00:03:15.666 CC lib/util/crc32c.o 00:03:15.666 CC lib/util/crc32_ieee.o 00:03:15.666 CC lib/util/crc64.o 00:03:15.666 CC lib/util/dif.o 00:03:15.666 CC lib/util/fd.o 00:03:15.666 CC lib/util/file.o 00:03:15.666 CC lib/util/fd_group.o 00:03:15.666 CC lib/util/hexlify.o 00:03:15.666 CC lib/util/iov.o 00:03:15.666 CC lib/util/math.o 00:03:15.666 CC lib/util/net.o 00:03:15.666 CC lib/util/pipe.o 00:03:15.666 CC lib/util/strerror_tls.o 00:03:15.666 CC lib/util/string.o 00:03:15.666 CC lib/util/uuid.o 00:03:15.666 CC lib/util/xor.o 00:03:15.666 CC lib/util/zipf.o 00:03:15.928 CC lib/vfio_user/host/vfio_user_pci.o 00:03:15.928 CC lib/vfio_user/host/vfio_user.o 00:03:15.928 LIB libspdk_dma.a 00:03:15.928 SO libspdk_dma.so.4.0 00:03:15.928 LIB libspdk_ioat.a 00:03:15.928 SO libspdk_ioat.so.7.0 00:03:15.928 SYMLINK libspdk_dma.so 00:03:15.928 SYMLINK libspdk_ioat.so 00:03:15.928 LIB libspdk_vfio_user.a 00:03:16.190 SO libspdk_vfio_user.so.5.0 00:03:16.190 LIB libspdk_util.a 00:03:16.190 SYMLINK libspdk_vfio_user.so 00:03:16.190 SO libspdk_util.so.10.0 00:03:16.450 SYMLINK libspdk_util.so 00:03:16.450 LIB libspdk_trace_parser.a 00:03:16.450 SO libspdk_trace_parser.so.5.0 00:03:16.711 SYMLINK libspdk_trace_parser.so 00:03:16.711 CC lib/json/json_parse.o 00:03:16.711 CC lib/json/json_util.o 00:03:16.711 CC lib/json/json_write.o 00:03:16.711 CC lib/conf/conf.o 00:03:16.711 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:16.711 CC lib/rdma_provider/common.o 00:03:16.711 CC lib/idxd/idxd.o 00:03:16.711 CC lib/idxd/idxd_user.o 00:03:16.711 CC lib/idxd/idxd_kernel.o 00:03:16.711 CC lib/rdma_utils/rdma_utils.o 00:03:16.711 CC lib/env_dpdk/env.o 00:03:16.711 CC lib/env_dpdk/memory.o 00:03:16.711 CC lib/vmd/vmd.o 00:03:16.711 CC lib/env_dpdk/init.o 00:03:16.711 CC lib/vmd/led.o 00:03:16.711 CC lib/env_dpdk/pci.o 00:03:16.711 CC lib/env_dpdk/threads.o 00:03:16.711 CC lib/env_dpdk/pci_ioat.o 00:03:16.711 CC lib/env_dpdk/pci_idxd.o 00:03:16.711 CC lib/env_dpdk/pci_virtio.o 00:03:16.711 CC lib/env_dpdk/pci_vmd.o 00:03:16.711 CC lib/env_dpdk/sigbus_handler.o 00:03:16.711 CC lib/env_dpdk/pci_event.o 00:03:16.711 CC lib/env_dpdk/pci_dpdk.o 00:03:16.711 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:16.711 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:16.971 LIB libspdk_rdma_provider.a 00:03:16.971 LIB libspdk_conf.a 00:03:16.971 SO libspdk_conf.so.6.0 00:03:16.971 SO libspdk_rdma_provider.so.6.0 00:03:16.971 LIB libspdk_rdma_utils.a 
00:03:16.971 LIB libspdk_json.a 00:03:16.971 SO libspdk_rdma_utils.so.1.0 00:03:16.971 SYMLINK libspdk_conf.so 00:03:16.971 SO libspdk_json.so.6.0 00:03:16.971 SYMLINK libspdk_rdma_provider.so 00:03:17.231 SYMLINK libspdk_rdma_utils.so 00:03:17.231 SYMLINK libspdk_json.so 00:03:17.231 LIB libspdk_idxd.a 00:03:17.231 SO libspdk_idxd.so.12.0 00:03:17.231 LIB libspdk_vmd.a 00:03:17.231 SYMLINK libspdk_idxd.so 00:03:17.231 SO libspdk_vmd.so.6.0 00:03:17.491 SYMLINK libspdk_vmd.so 00:03:17.491 CC lib/jsonrpc/jsonrpc_server.o 00:03:17.491 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:17.491 CC lib/jsonrpc/jsonrpc_client.o 00:03:17.491 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:17.752 LIB libspdk_jsonrpc.a 00:03:17.752 SO libspdk_jsonrpc.so.6.0 00:03:17.752 SYMLINK libspdk_jsonrpc.so 00:03:18.012 LIB libspdk_env_dpdk.a 00:03:18.012 SO libspdk_env_dpdk.so.14.1 00:03:18.012 SYMLINK libspdk_env_dpdk.so 00:03:18.271 CC lib/rpc/rpc.o 00:03:18.530 LIB libspdk_rpc.a 00:03:18.530 SO libspdk_rpc.so.6.0 00:03:18.530 SYMLINK libspdk_rpc.so 00:03:18.790 CC lib/notify/notify.o 00:03:18.790 CC lib/notify/notify_rpc.o 00:03:18.790 CC lib/trace/trace.o 00:03:18.790 CC lib/trace/trace_flags.o 00:03:18.790 CC lib/trace/trace_rpc.o 00:03:18.790 CC lib/keyring/keyring.o 00:03:18.790 CC lib/keyring/keyring_rpc.o 00:03:19.050 LIB libspdk_notify.a 00:03:19.050 SO libspdk_notify.so.6.0 00:03:19.050 LIB libspdk_keyring.a 00:03:19.050 LIB libspdk_trace.a 00:03:19.050 SO libspdk_keyring.so.1.0 00:03:19.050 SO libspdk_trace.so.10.0 00:03:19.050 SYMLINK libspdk_notify.so 00:03:19.310 SYMLINK libspdk_keyring.so 00:03:19.310 SYMLINK libspdk_trace.so 00:03:19.570 CC lib/thread/thread.o 00:03:19.570 CC lib/thread/iobuf.o 00:03:19.570 CC lib/sock/sock.o 00:03:19.570 CC lib/sock/sock_rpc.o 00:03:19.830 LIB libspdk_sock.a 00:03:20.090 SO libspdk_sock.so.10.0 00:03:20.090 SYMLINK libspdk_sock.so 00:03:20.351 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:20.351 CC lib/nvme/nvme_ctrlr.o 00:03:20.351 CC lib/nvme/nvme_fabric.o 00:03:20.351 CC lib/nvme/nvme_ns_cmd.o 00:03:20.351 CC lib/nvme/nvme_ns.o 00:03:20.351 CC lib/nvme/nvme_pcie_common.o 00:03:20.351 CC lib/nvme/nvme_pcie.o 00:03:20.351 CC lib/nvme/nvme_qpair.o 00:03:20.351 CC lib/nvme/nvme.o 00:03:20.351 CC lib/nvme/nvme_quirks.o 00:03:20.351 CC lib/nvme/nvme_transport.o 00:03:20.351 CC lib/nvme/nvme_discovery.o 00:03:20.351 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:20.351 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:20.351 CC lib/nvme/nvme_tcp.o 00:03:20.351 CC lib/nvme/nvme_opal.o 00:03:20.351 CC lib/nvme/nvme_io_msg.o 00:03:20.351 CC lib/nvme/nvme_stubs.o 00:03:20.351 CC lib/nvme/nvme_poll_group.o 00:03:20.351 CC lib/nvme/nvme_zns.o 00:03:20.351 CC lib/nvme/nvme_cuse.o 00:03:20.351 CC lib/nvme/nvme_auth.o 00:03:20.351 CC lib/nvme/nvme_vfio_user.o 00:03:20.351 CC lib/nvme/nvme_rdma.o 00:03:20.921 LIB libspdk_thread.a 00:03:20.921 SO libspdk_thread.so.10.1 00:03:20.921 SYMLINK libspdk_thread.so 00:03:21.181 CC lib/accel/accel.o 00:03:21.181 CC lib/accel/accel_rpc.o 00:03:21.181 CC lib/accel/accel_sw.o 00:03:21.181 CC lib/vfu_tgt/tgt_endpoint.o 00:03:21.181 CC lib/vfu_tgt/tgt_rpc.o 00:03:21.181 CC lib/virtio/virtio.o 00:03:21.181 CC lib/blob/blobstore.o 00:03:21.181 CC lib/virtio/virtio_vhost_user.o 00:03:21.181 CC lib/blob/request.o 00:03:21.181 CC lib/virtio/virtio_vfio_user.o 00:03:21.181 CC lib/init/json_config.o 00:03:21.181 CC lib/blob/blob_bs_dev.o 00:03:21.181 CC lib/blob/zeroes.o 00:03:21.181 CC lib/virtio/virtio_pci.o 00:03:21.181 CC lib/init/subsystem.o 00:03:21.181 CC lib/init/subsystem_rpc.o 
00:03:21.181 CC lib/init/rpc.o 00:03:21.440 LIB libspdk_init.a 00:03:21.440 SO libspdk_init.so.5.0 00:03:21.440 LIB libspdk_vfu_tgt.a 00:03:21.700 LIB libspdk_virtio.a 00:03:21.700 SO libspdk_vfu_tgt.so.3.0 00:03:21.700 SO libspdk_virtio.so.7.0 00:03:21.700 SYMLINK libspdk_init.so 00:03:21.700 SYMLINK libspdk_vfu_tgt.so 00:03:21.700 SYMLINK libspdk_virtio.so 00:03:21.960 CC lib/event/app.o 00:03:21.960 CC lib/event/reactor.o 00:03:21.960 CC lib/event/log_rpc.o 00:03:21.960 CC lib/event/app_rpc.o 00:03:21.960 CC lib/event/scheduler_static.o 00:03:21.960 LIB libspdk_accel.a 00:03:22.221 SO libspdk_accel.so.16.0 00:03:22.221 LIB libspdk_nvme.a 00:03:22.221 SYMLINK libspdk_accel.so 00:03:22.221 SO libspdk_nvme.so.13.1 00:03:22.221 LIB libspdk_event.a 00:03:22.481 SO libspdk_event.so.14.0 00:03:22.481 SYMLINK libspdk_event.so 00:03:22.481 SYMLINK libspdk_nvme.so 00:03:22.481 CC lib/bdev/bdev.o 00:03:22.481 CC lib/bdev/bdev_rpc.o 00:03:22.481 CC lib/bdev/bdev_zone.o 00:03:22.481 CC lib/bdev/part.o 00:03:22.481 CC lib/bdev/scsi_nvme.o 00:03:23.863 LIB libspdk_blob.a 00:03:23.863 SO libspdk_blob.so.11.0 00:03:23.863 SYMLINK libspdk_blob.so 00:03:24.124 CC lib/lvol/lvol.o 00:03:24.124 CC lib/blobfs/blobfs.o 00:03:24.124 CC lib/blobfs/tree.o 00:03:24.693 LIB libspdk_bdev.a 00:03:24.693 SO libspdk_bdev.so.16.0 00:03:24.693 LIB libspdk_blobfs.a 00:03:24.693 SO libspdk_blobfs.so.10.0 00:03:24.693 SYMLINK libspdk_bdev.so 00:03:24.953 LIB libspdk_lvol.a 00:03:24.953 SO libspdk_lvol.so.10.0 00:03:24.953 SYMLINK libspdk_blobfs.so 00:03:24.953 SYMLINK libspdk_lvol.so 00:03:25.212 CC lib/nvmf/ctrlr.o 00:03:25.212 CC lib/nvmf/ctrlr_discovery.o 00:03:25.212 CC lib/nvmf/ctrlr_bdev.o 00:03:25.212 CC lib/ublk/ublk.o 00:03:25.212 CC lib/nbd/nbd.o 00:03:25.212 CC lib/ublk/ublk_rpc.o 00:03:25.212 CC lib/nvmf/subsystem.o 00:03:25.212 CC lib/nbd/nbd_rpc.o 00:03:25.212 CC lib/nvmf/nvmf.o 00:03:25.212 CC lib/ftl/ftl_core.o 00:03:25.212 CC lib/nvmf/nvmf_rpc.o 00:03:25.212 CC lib/ftl/ftl_init.o 00:03:25.212 CC lib/scsi/dev.o 00:03:25.212 CC lib/nvmf/transport.o 00:03:25.212 CC lib/ftl/ftl_layout.o 00:03:25.212 CC lib/scsi/lun.o 00:03:25.212 CC lib/nvmf/tcp.o 00:03:25.212 CC lib/ftl/ftl_debug.o 00:03:25.212 CC lib/nvmf/stubs.o 00:03:25.212 CC lib/scsi/port.o 00:03:25.212 CC lib/ftl/ftl_io.o 00:03:25.212 CC lib/nvmf/mdns_server.o 00:03:25.212 CC lib/scsi/scsi.o 00:03:25.212 CC lib/ftl/ftl_sb.o 00:03:25.212 CC lib/nvmf/vfio_user.o 00:03:25.212 CC lib/scsi/scsi_bdev.o 00:03:25.212 CC lib/nvmf/rdma.o 00:03:25.212 CC lib/ftl/ftl_l2p.o 00:03:25.212 CC lib/scsi/scsi_pr.o 00:03:25.212 CC lib/ftl/ftl_nv_cache.o 00:03:25.212 CC lib/ftl/ftl_l2p_flat.o 00:03:25.212 CC lib/nvmf/auth.o 00:03:25.212 CC lib/scsi/scsi_rpc.o 00:03:25.212 CC lib/ftl/ftl_band.o 00:03:25.212 CC lib/scsi/task.o 00:03:25.212 CC lib/ftl/ftl_band_ops.o 00:03:25.212 CC lib/ftl/ftl_writer.o 00:03:25.212 CC lib/ftl/ftl_rq.o 00:03:25.212 CC lib/ftl/ftl_reloc.o 00:03:25.212 CC lib/ftl/ftl_l2p_cache.o 00:03:25.212 CC lib/ftl/ftl_p2l.o 00:03:25.212 CC lib/ftl/mngt/ftl_mngt.o 00:03:25.212 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:25.212 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:25.212 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:25.212 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:25.212 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:25.212 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:25.212 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:25.212 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:25.212 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:25.212 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:25.212 CC 
lib/ftl/mngt/ftl_mngt_recovery.o 00:03:25.212 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:25.212 CC lib/ftl/utils/ftl_conf.o 00:03:25.212 CC lib/ftl/utils/ftl_md.o 00:03:25.212 CC lib/ftl/utils/ftl_mempool.o 00:03:25.212 CC lib/ftl/utils/ftl_property.o 00:03:25.212 CC lib/ftl/utils/ftl_bitmap.o 00:03:25.212 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:25.212 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:25.212 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:25.212 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:25.212 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:25.212 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:25.212 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:25.212 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:25.212 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:25.212 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:25.212 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:25.212 CC lib/ftl/base/ftl_base_bdev.o 00:03:25.212 CC lib/ftl/base/ftl_base_dev.o 00:03:25.212 CC lib/ftl/ftl_trace.o 00:03:25.780 LIB libspdk_nbd.a 00:03:25.780 SO libspdk_nbd.so.7.0 00:03:25.780 SYMLINK libspdk_nbd.so 00:03:25.780 LIB libspdk_scsi.a 00:03:25.780 SO libspdk_scsi.so.9.0 00:03:25.780 LIB libspdk_ublk.a 00:03:26.040 SO libspdk_ublk.so.3.0 00:03:26.040 SYMLINK libspdk_scsi.so 00:03:26.040 SYMLINK libspdk_ublk.so 00:03:26.040 LIB libspdk_ftl.a 00:03:26.300 CC lib/iscsi/conn.o 00:03:26.300 CC lib/iscsi/init_grp.o 00:03:26.300 CC lib/iscsi/param.o 00:03:26.300 CC lib/iscsi/iscsi.o 00:03:26.300 CC lib/vhost/vhost.o 00:03:26.300 CC lib/iscsi/md5.o 00:03:26.300 CC lib/iscsi/tgt_node.o 00:03:26.300 CC lib/vhost/vhost_rpc.o 00:03:26.300 SO libspdk_ftl.so.9.0 00:03:26.300 CC lib/iscsi/portal_grp.o 00:03:26.300 CC lib/vhost/vhost_scsi.o 00:03:26.300 CC lib/vhost/rte_vhost_user.o 00:03:26.300 CC lib/iscsi/iscsi_subsystem.o 00:03:26.300 CC lib/vhost/vhost_blk.o 00:03:26.300 CC lib/iscsi/iscsi_rpc.o 00:03:26.300 CC lib/iscsi/task.o 00:03:26.559 SYMLINK libspdk_ftl.so 00:03:26.818 LIB libspdk_nvmf.a 00:03:27.078 SO libspdk_nvmf.so.19.0 00:03:27.078 LIB libspdk_vhost.a 00:03:27.078 SYMLINK libspdk_nvmf.so 00:03:27.340 SO libspdk_vhost.so.8.0 00:03:27.340 SYMLINK libspdk_vhost.so 00:03:27.340 LIB libspdk_iscsi.a 00:03:27.604 SO libspdk_iscsi.so.8.0 00:03:27.604 SYMLINK libspdk_iscsi.so 00:03:28.263 CC module/vfu_device/vfu_virtio.o 00:03:28.263 CC module/vfu_device/vfu_virtio_blk.o 00:03:28.263 CC module/env_dpdk/env_dpdk_rpc.o 00:03:28.263 CC module/vfu_device/vfu_virtio_scsi.o 00:03:28.263 CC module/vfu_device/vfu_virtio_rpc.o 00:03:28.263 LIB libspdk_env_dpdk_rpc.a 00:03:28.263 CC module/accel/iaa/accel_iaa.o 00:03:28.263 CC module/accel/dsa/accel_dsa.o 00:03:28.263 CC module/accel/dsa/accel_dsa_rpc.o 00:03:28.263 CC module/accel/error/accel_error.o 00:03:28.263 CC module/accel/ioat/accel_ioat_rpc.o 00:03:28.263 CC module/accel/iaa/accel_iaa_rpc.o 00:03:28.263 CC module/accel/ioat/accel_ioat.o 00:03:28.263 CC module/accel/error/accel_error_rpc.o 00:03:28.263 CC module/keyring/file/keyring.o 00:03:28.263 CC module/keyring/linux/keyring_rpc.o 00:03:28.263 CC module/keyring/linux/keyring.o 00:03:28.263 CC module/keyring/file/keyring_rpc.o 00:03:28.263 CC module/blob/bdev/blob_bdev.o 00:03:28.263 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:28.263 CC module/sock/posix/posix.o 00:03:28.263 SO libspdk_env_dpdk_rpc.so.6.0 00:03:28.263 CC module/scheduler/gscheduler/gscheduler.o 00:03:28.263 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:28.522 SYMLINK libspdk_env_dpdk_rpc.so 00:03:28.522 LIB libspdk_scheduler_gscheduler.a 00:03:28.522 LIB 
libspdk_keyring_linux.a 00:03:28.522 LIB libspdk_keyring_file.a 00:03:28.522 SO libspdk_scheduler_gscheduler.so.4.0 00:03:28.522 LIB libspdk_accel_iaa.a 00:03:28.522 LIB libspdk_scheduler_dpdk_governor.a 00:03:28.522 LIB libspdk_accel_error.a 00:03:28.522 LIB libspdk_accel_ioat.a 00:03:28.522 SO libspdk_keyring_linux.so.1.0 00:03:28.522 SO libspdk_keyring_file.so.1.0 00:03:28.522 LIB libspdk_scheduler_dynamic.a 00:03:28.522 SO libspdk_accel_ioat.so.6.0 00:03:28.522 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:28.522 SO libspdk_accel_iaa.so.3.0 00:03:28.522 SO libspdk_accel_error.so.2.0 00:03:28.522 LIB libspdk_accel_dsa.a 00:03:28.782 SYMLINK libspdk_scheduler_gscheduler.so 00:03:28.782 LIB libspdk_blob_bdev.a 00:03:28.782 SO libspdk_scheduler_dynamic.so.4.0 00:03:28.782 SYMLINK libspdk_keyring_linux.so 00:03:28.782 SYMLINK libspdk_keyring_file.so 00:03:28.782 SO libspdk_accel_dsa.so.5.0 00:03:28.782 SO libspdk_blob_bdev.so.11.0 00:03:28.782 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:28.782 SYMLINK libspdk_accel_ioat.so 00:03:28.782 SYMLINK libspdk_accel_iaa.so 00:03:28.782 SYMLINK libspdk_accel_error.so 00:03:28.782 SYMLINK libspdk_scheduler_dynamic.so 00:03:28.782 SYMLINK libspdk_accel_dsa.so 00:03:28.782 SYMLINK libspdk_blob_bdev.so 00:03:28.782 LIB libspdk_vfu_device.a 00:03:28.782 SO libspdk_vfu_device.so.3.0 00:03:29.041 SYMLINK libspdk_vfu_device.so 00:03:29.041 LIB libspdk_sock_posix.a 00:03:29.041 SO libspdk_sock_posix.so.6.0 00:03:29.321 SYMLINK libspdk_sock_posix.so 00:03:29.321 CC module/bdev/delay/vbdev_delay.o 00:03:29.321 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:29.321 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:29.321 CC module/bdev/malloc/bdev_malloc.o 00:03:29.321 CC module/bdev/error/vbdev_error.o 00:03:29.321 CC module/bdev/error/vbdev_error_rpc.o 00:03:29.321 CC module/bdev/aio/bdev_aio.o 00:03:29.321 CC module/bdev/gpt/gpt.o 00:03:29.321 CC module/bdev/aio/bdev_aio_rpc.o 00:03:29.321 CC module/bdev/gpt/vbdev_gpt.o 00:03:29.321 CC module/bdev/raid/bdev_raid.o 00:03:29.321 CC module/bdev/raid/bdev_raid_rpc.o 00:03:29.321 CC module/bdev/raid/bdev_raid_sb.o 00:03:29.321 CC module/bdev/raid/raid0.o 00:03:29.321 CC module/bdev/raid/raid1.o 00:03:29.321 CC module/blobfs/bdev/blobfs_bdev.o 00:03:29.321 CC module/bdev/lvol/vbdev_lvol.o 00:03:29.321 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:29.321 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:29.321 CC module/bdev/raid/concat.o 00:03:29.321 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:29.321 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:29.321 CC module/bdev/iscsi/bdev_iscsi.o 00:03:29.321 CC module/bdev/passthru/vbdev_passthru.o 00:03:29.321 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:29.321 CC module/bdev/null/bdev_null.o 00:03:29.321 CC module/bdev/nvme/bdev_nvme.o 00:03:29.321 CC module/bdev/ftl/bdev_ftl.o 00:03:29.321 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:29.321 CC module/bdev/null/bdev_null_rpc.o 00:03:29.321 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:29.321 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:29.321 CC module/bdev/nvme/nvme_rpc.o 00:03:29.321 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:29.321 CC module/bdev/split/vbdev_split.o 00:03:29.321 CC module/bdev/nvme/bdev_mdns_client.o 00:03:29.321 CC module/bdev/split/vbdev_split_rpc.o 00:03:29.321 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:29.321 CC module/bdev/nvme/vbdev_opal.o 00:03:29.321 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:29.321 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:29.321 CC 
module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:29.580 LIB libspdk_blobfs_bdev.a 00:03:29.580 LIB libspdk_bdev_error.a 00:03:29.580 SO libspdk_blobfs_bdev.so.6.0 00:03:29.580 LIB libspdk_bdev_gpt.a 00:03:29.580 LIB libspdk_bdev_split.a 00:03:29.580 LIB libspdk_bdev_null.a 00:03:29.580 SO libspdk_bdev_gpt.so.6.0 00:03:29.580 SO libspdk_bdev_split.so.6.0 00:03:29.580 SO libspdk_bdev_error.so.6.0 00:03:29.580 LIB libspdk_bdev_passthru.a 00:03:29.580 SYMLINK libspdk_blobfs_bdev.so 00:03:29.580 LIB libspdk_bdev_ftl.a 00:03:29.580 SO libspdk_bdev_null.so.6.0 00:03:29.580 LIB libspdk_bdev_malloc.a 00:03:29.580 LIB libspdk_bdev_delay.a 00:03:29.580 SO libspdk_bdev_passthru.so.6.0 00:03:29.580 LIB libspdk_bdev_aio.a 00:03:29.840 SO libspdk_bdev_ftl.so.6.0 00:03:29.840 SYMLINK libspdk_bdev_error.so 00:03:29.840 LIB libspdk_bdev_zone_block.a 00:03:29.840 SYMLINK libspdk_bdev_split.so 00:03:29.840 SO libspdk_bdev_delay.so.6.0 00:03:29.840 SO libspdk_bdev_malloc.so.6.0 00:03:29.840 SO libspdk_bdev_aio.so.6.0 00:03:29.840 SYMLINK libspdk_bdev_gpt.so 00:03:29.840 LIB libspdk_bdev_iscsi.a 00:03:29.840 SYMLINK libspdk_bdev_null.so 00:03:29.840 SO libspdk_bdev_zone_block.so.6.0 00:03:29.840 SYMLINK libspdk_bdev_passthru.so 00:03:29.840 SO libspdk_bdev_iscsi.so.6.0 00:03:29.840 SYMLINK libspdk_bdev_ftl.so 00:03:29.840 SYMLINK libspdk_bdev_delay.so 00:03:29.840 SYMLINK libspdk_bdev_malloc.so 00:03:29.840 SYMLINK libspdk_bdev_aio.so 00:03:29.840 LIB libspdk_bdev_virtio.a 00:03:29.840 SYMLINK libspdk_bdev_zone_block.so 00:03:29.840 LIB libspdk_bdev_lvol.a 00:03:29.840 SYMLINK libspdk_bdev_iscsi.so 00:03:29.840 SO libspdk_bdev_virtio.so.6.0 00:03:29.840 SO libspdk_bdev_lvol.so.6.0 00:03:29.840 SYMLINK libspdk_bdev_virtio.so 00:03:29.840 SYMLINK libspdk_bdev_lvol.so 00:03:30.430 LIB libspdk_bdev_raid.a 00:03:30.430 SO libspdk_bdev_raid.so.6.0 00:03:30.430 SYMLINK libspdk_bdev_raid.so 00:03:31.372 LIB libspdk_bdev_nvme.a 00:03:31.372 SO libspdk_bdev_nvme.so.7.0 00:03:31.372 SYMLINK libspdk_bdev_nvme.so 00:03:32.315 CC module/event/subsystems/sock/sock.o 00:03:32.315 CC module/event/subsystems/vmd/vmd.o 00:03:32.315 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:32.315 CC module/event/subsystems/scheduler/scheduler.o 00:03:32.315 CC module/event/subsystems/iobuf/iobuf.o 00:03:32.315 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:32.315 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:32.315 CC module/event/subsystems/keyring/keyring.o 00:03:32.315 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:32.315 LIB libspdk_event_keyring.a 00:03:32.315 LIB libspdk_event_sock.a 00:03:32.315 LIB libspdk_event_vmd.a 00:03:32.315 LIB libspdk_event_scheduler.a 00:03:32.315 LIB libspdk_event_iobuf.a 00:03:32.315 LIB libspdk_event_vhost_blk.a 00:03:32.315 LIB libspdk_event_vfu_tgt.a 00:03:32.315 SO libspdk_event_keyring.so.1.0 00:03:32.315 SO libspdk_event_sock.so.5.0 00:03:32.315 SO libspdk_event_vmd.so.6.0 00:03:32.315 SO libspdk_event_scheduler.so.4.0 00:03:32.315 SO libspdk_event_iobuf.so.3.0 00:03:32.315 SO libspdk_event_vhost_blk.so.3.0 00:03:32.315 SO libspdk_event_vfu_tgt.so.3.0 00:03:32.315 SYMLINK libspdk_event_keyring.so 00:03:32.315 SYMLINK libspdk_event_sock.so 00:03:32.315 SYMLINK libspdk_event_vmd.so 00:03:32.315 SYMLINK libspdk_event_scheduler.so 00:03:32.315 SYMLINK libspdk_event_iobuf.so 00:03:32.315 SYMLINK libspdk_event_vhost_blk.so 00:03:32.575 SYMLINK libspdk_event_vfu_tgt.so 00:03:32.836 CC module/event/subsystems/accel/accel.o 00:03:32.836 LIB libspdk_event_accel.a 00:03:33.096 SO 
libspdk_event_accel.so.6.0 00:03:33.096 SYMLINK libspdk_event_accel.so 00:03:33.356 CC module/event/subsystems/bdev/bdev.o 00:03:33.617 LIB libspdk_event_bdev.a 00:03:33.617 SO libspdk_event_bdev.so.6.0 00:03:33.617 SYMLINK libspdk_event_bdev.so 00:03:33.878 CC module/event/subsystems/nbd/nbd.o 00:03:34.139 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:34.139 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:34.139 CC module/event/subsystems/scsi/scsi.o 00:03:34.139 CC module/event/subsystems/ublk/ublk.o 00:03:34.139 LIB libspdk_event_nbd.a 00:03:34.139 LIB libspdk_event_ublk.a 00:03:34.139 LIB libspdk_event_scsi.a 00:03:34.139 SO libspdk_event_nbd.so.6.0 00:03:34.139 SO libspdk_event_ublk.so.3.0 00:03:34.139 SO libspdk_event_scsi.so.6.0 00:03:34.139 LIB libspdk_event_nvmf.a 00:03:34.400 SYMLINK libspdk_event_nbd.so 00:03:34.400 SYMLINK libspdk_event_ublk.so 00:03:34.400 SYMLINK libspdk_event_scsi.so 00:03:34.400 SO libspdk_event_nvmf.so.6.0 00:03:34.400 SYMLINK libspdk_event_nvmf.so 00:03:34.660 CC module/event/subsystems/iscsi/iscsi.o 00:03:34.660 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:34.921 LIB libspdk_event_iscsi.a 00:03:34.921 LIB libspdk_event_vhost_scsi.a 00:03:34.921 SO libspdk_event_vhost_scsi.so.3.0 00:03:34.921 SO libspdk_event_iscsi.so.6.0 00:03:34.921 SYMLINK libspdk_event_vhost_scsi.so 00:03:34.921 SYMLINK libspdk_event_iscsi.so 00:03:35.181 SO libspdk.so.6.0 00:03:35.181 SYMLINK libspdk.so 00:03:35.441 CXX app/trace/trace.o 00:03:35.441 TEST_HEADER include/spdk/accel.h 00:03:35.441 CC app/trace_record/trace_record.o 00:03:35.441 TEST_HEADER include/spdk/accel_module.h 00:03:35.441 CC app/spdk_nvme_discover/discovery_aer.o 00:03:35.441 TEST_HEADER include/spdk/assert.h 00:03:35.441 TEST_HEADER include/spdk/base64.h 00:03:35.441 TEST_HEADER include/spdk/barrier.h 00:03:35.441 TEST_HEADER include/spdk/bdev_module.h 00:03:35.441 TEST_HEADER include/spdk/bdev.h 00:03:35.441 TEST_HEADER include/spdk/bit_array.h 00:03:35.441 TEST_HEADER include/spdk/bit_pool.h 00:03:35.441 TEST_HEADER include/spdk/bdev_zone.h 00:03:35.441 CC app/spdk_nvme_perf/perf.o 00:03:35.441 TEST_HEADER include/spdk/blob_bdev.h 00:03:35.441 TEST_HEADER include/spdk/blobfs.h 00:03:35.441 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:35.442 CC app/spdk_lspci/spdk_lspci.o 00:03:35.442 TEST_HEADER include/spdk/blob.h 00:03:35.442 CC test/rpc_client/rpc_client_test.o 00:03:35.442 TEST_HEADER include/spdk/config.h 00:03:35.442 TEST_HEADER include/spdk/conf.h 00:03:35.442 CC app/spdk_nvme_identify/identify.o 00:03:35.442 CC app/spdk_top/spdk_top.o 00:03:35.442 TEST_HEADER include/spdk/cpuset.h 00:03:35.442 TEST_HEADER include/spdk/crc16.h 00:03:35.442 TEST_HEADER include/spdk/crc32.h 00:03:35.442 TEST_HEADER include/spdk/crc64.h 00:03:35.442 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:35.442 TEST_HEADER include/spdk/dif.h 00:03:35.442 TEST_HEADER include/spdk/dma.h 00:03:35.442 TEST_HEADER include/spdk/endian.h 00:03:35.442 TEST_HEADER include/spdk/env_dpdk.h 00:03:35.442 TEST_HEADER include/spdk/event.h 00:03:35.442 TEST_HEADER include/spdk/env.h 00:03:35.442 TEST_HEADER include/spdk/fd_group.h 00:03:35.701 TEST_HEADER include/spdk/fd.h 00:03:35.701 TEST_HEADER include/spdk/file.h 00:03:35.701 TEST_HEADER include/spdk/ftl.h 00:03:35.701 TEST_HEADER include/spdk/gpt_spec.h 00:03:35.701 TEST_HEADER include/spdk/hexlify.h 00:03:35.701 TEST_HEADER include/spdk/histogram_data.h 00:03:35.701 TEST_HEADER include/spdk/idxd_spec.h 00:03:35.701 TEST_HEADER include/spdk/idxd.h 00:03:35.701 TEST_HEADER 
include/spdk/ioat.h 00:03:35.701 TEST_HEADER include/spdk/init.h 00:03:35.701 CC app/spdk_dd/spdk_dd.o 00:03:35.701 TEST_HEADER include/spdk/ioat_spec.h 00:03:35.701 TEST_HEADER include/spdk/iscsi_spec.h 00:03:35.701 TEST_HEADER include/spdk/json.h 00:03:35.701 TEST_HEADER include/spdk/keyring.h 00:03:35.701 TEST_HEADER include/spdk/jsonrpc.h 00:03:35.701 TEST_HEADER include/spdk/keyring_module.h 00:03:35.701 TEST_HEADER include/spdk/log.h 00:03:35.701 CC app/iscsi_tgt/iscsi_tgt.o 00:03:35.701 TEST_HEADER include/spdk/likely.h 00:03:35.701 CC app/nvmf_tgt/nvmf_main.o 00:03:35.701 TEST_HEADER include/spdk/lvol.h 00:03:35.701 TEST_HEADER include/spdk/memory.h 00:03:35.701 TEST_HEADER include/spdk/mmio.h 00:03:35.701 TEST_HEADER include/spdk/nbd.h 00:03:35.701 TEST_HEADER include/spdk/net.h 00:03:35.701 TEST_HEADER include/spdk/notify.h 00:03:35.701 TEST_HEADER include/spdk/nvme.h 00:03:35.701 TEST_HEADER include/spdk/nvme_intel.h 00:03:35.701 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:35.701 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:35.701 CC app/spdk_tgt/spdk_tgt.o 00:03:35.701 TEST_HEADER include/spdk/nvme_spec.h 00:03:35.701 TEST_HEADER include/spdk/nvme_zns.h 00:03:35.701 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:35.701 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:35.701 TEST_HEADER include/spdk/nvmf.h 00:03:35.701 TEST_HEADER include/spdk/nvmf_spec.h 00:03:35.701 TEST_HEADER include/spdk/nvmf_transport.h 00:03:35.701 TEST_HEADER include/spdk/opal.h 00:03:35.701 TEST_HEADER include/spdk/opal_spec.h 00:03:35.701 TEST_HEADER include/spdk/pci_ids.h 00:03:35.701 TEST_HEADER include/spdk/queue.h 00:03:35.701 TEST_HEADER include/spdk/pipe.h 00:03:35.701 TEST_HEADER include/spdk/reduce.h 00:03:35.701 TEST_HEADER include/spdk/scheduler.h 00:03:35.701 TEST_HEADER include/spdk/rpc.h 00:03:35.701 TEST_HEADER include/spdk/scsi.h 00:03:35.701 TEST_HEADER include/spdk/scsi_spec.h 00:03:35.701 TEST_HEADER include/spdk/stdinc.h 00:03:35.701 TEST_HEADER include/spdk/sock.h 00:03:35.701 TEST_HEADER include/spdk/string.h 00:03:35.701 TEST_HEADER include/spdk/thread.h 00:03:35.701 TEST_HEADER include/spdk/trace.h 00:03:35.701 TEST_HEADER include/spdk/tree.h 00:03:35.701 TEST_HEADER include/spdk/trace_parser.h 00:03:35.701 TEST_HEADER include/spdk/ublk.h 00:03:35.701 TEST_HEADER include/spdk/util.h 00:03:35.701 TEST_HEADER include/spdk/version.h 00:03:35.701 TEST_HEADER include/spdk/uuid.h 00:03:35.701 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:35.701 TEST_HEADER include/spdk/vhost.h 00:03:35.701 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:35.701 TEST_HEADER include/spdk/vmd.h 00:03:35.701 TEST_HEADER include/spdk/xor.h 00:03:35.701 TEST_HEADER include/spdk/zipf.h 00:03:35.701 CXX test/cpp_headers/accel.o 00:03:35.701 CXX test/cpp_headers/accel_module.o 00:03:35.701 CXX test/cpp_headers/assert.o 00:03:35.701 CXX test/cpp_headers/barrier.o 00:03:35.701 CXX test/cpp_headers/base64.o 00:03:35.701 CXX test/cpp_headers/bdev.o 00:03:35.701 CXX test/cpp_headers/bdev_module.o 00:03:35.701 CXX test/cpp_headers/bdev_zone.o 00:03:35.701 CXX test/cpp_headers/bit_array.o 00:03:35.701 CXX test/cpp_headers/blob_bdev.o 00:03:35.701 CXX test/cpp_headers/bit_pool.o 00:03:35.701 CXX test/cpp_headers/blobfs_bdev.o 00:03:35.701 CXX test/cpp_headers/blobfs.o 00:03:35.701 CXX test/cpp_headers/blob.o 00:03:35.701 CXX test/cpp_headers/conf.o 00:03:35.701 CXX test/cpp_headers/config.o 00:03:35.701 CXX test/cpp_headers/cpuset.o 00:03:35.701 CXX test/cpp_headers/crc16.o 00:03:35.701 CXX test/cpp_headers/crc32.o 
00:03:35.701 CXX test/cpp_headers/crc64.o 00:03:35.701 CXX test/cpp_headers/dif.o 00:03:35.701 CXX test/cpp_headers/dma.o 00:03:35.701 CXX test/cpp_headers/endian.o 00:03:35.701 CXX test/cpp_headers/env_dpdk.o 00:03:35.701 CXX test/cpp_headers/fd_group.o 00:03:35.701 CXX test/cpp_headers/event.o 00:03:35.701 CXX test/cpp_headers/env.o 00:03:35.701 CXX test/cpp_headers/file.o 00:03:35.701 CXX test/cpp_headers/fd.o 00:03:35.701 CXX test/cpp_headers/hexlify.o 00:03:35.701 CXX test/cpp_headers/ftl.o 00:03:35.701 CXX test/cpp_headers/gpt_spec.o 00:03:35.701 CXX test/cpp_headers/histogram_data.o 00:03:35.701 CXX test/cpp_headers/idxd_spec.o 00:03:35.701 CXX test/cpp_headers/idxd.o 00:03:35.701 CXX test/cpp_headers/ioat.o 00:03:35.701 CXX test/cpp_headers/init.o 00:03:35.701 CXX test/cpp_headers/iscsi_spec.o 00:03:35.701 CXX test/cpp_headers/ioat_spec.o 00:03:35.701 CXX test/cpp_headers/jsonrpc.o 00:03:35.701 CXX test/cpp_headers/keyring.o 00:03:35.701 CXX test/cpp_headers/json.o 00:03:35.701 CXX test/cpp_headers/likely.o 00:03:35.701 CXX test/cpp_headers/keyring_module.o 00:03:35.701 CXX test/cpp_headers/memory.o 00:03:35.701 CXX test/cpp_headers/log.o 00:03:35.701 CXX test/cpp_headers/mmio.o 00:03:35.701 CXX test/cpp_headers/net.o 00:03:35.701 CXX test/cpp_headers/lvol.o 00:03:35.701 CXX test/cpp_headers/nbd.o 00:03:35.701 CXX test/cpp_headers/nvme_ocssd.o 00:03:35.701 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:35.701 CXX test/cpp_headers/nvme_intel.o 00:03:35.701 CXX test/cpp_headers/notify.o 00:03:35.701 CXX test/cpp_headers/nvme_spec.o 00:03:35.701 CXX test/cpp_headers/nvme_zns.o 00:03:35.701 CXX test/cpp_headers/nvme.o 00:03:35.701 CXX test/cpp_headers/nvmf.o 00:03:35.701 CC examples/ioat/verify/verify.o 00:03:35.701 CXX test/cpp_headers/nvmf_cmd.o 00:03:35.701 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:35.701 CXX test/cpp_headers/nvmf_transport.o 00:03:35.701 CXX test/cpp_headers/nvmf_spec.o 00:03:35.701 CXX test/cpp_headers/opal_spec.o 00:03:35.701 CXX test/cpp_headers/opal.o 00:03:35.701 CXX test/cpp_headers/pipe.o 00:03:35.701 CXX test/cpp_headers/queue.o 00:03:35.701 CXX test/cpp_headers/pci_ids.o 00:03:35.701 CXX test/cpp_headers/rpc.o 00:03:35.701 CC examples/util/zipf/zipf.o 00:03:35.701 CXX test/cpp_headers/scheduler.o 00:03:35.701 CXX test/cpp_headers/reduce.o 00:03:35.701 CC examples/ioat/perf/perf.o 00:03:35.701 CXX test/cpp_headers/scsi_spec.o 00:03:35.701 CXX test/cpp_headers/scsi.o 00:03:35.701 CXX test/cpp_headers/sock.o 00:03:35.701 CXX test/cpp_headers/stdinc.o 00:03:35.701 CXX test/cpp_headers/string.o 00:03:35.701 CXX test/cpp_headers/thread.o 00:03:35.701 CC test/thread/poller_perf/poller_perf.o 00:03:35.701 CXX test/cpp_headers/trace.o 00:03:35.701 CXX test/cpp_headers/trace_parser.o 00:03:35.701 LINK spdk_lspci 00:03:35.701 CXX test/cpp_headers/ublk.o 00:03:35.701 CXX test/cpp_headers/tree.o 00:03:35.701 CC test/app/histogram_perf/histogram_perf.o 00:03:35.701 CXX test/cpp_headers/util.o 00:03:35.701 CXX test/cpp_headers/uuid.o 00:03:35.701 CXX test/cpp_headers/version.o 00:03:35.701 CXX test/cpp_headers/vfio_user_pci.o 00:03:35.701 CXX test/cpp_headers/vfio_user_spec.o 00:03:35.701 CXX test/cpp_headers/vhost.o 00:03:35.701 CXX test/cpp_headers/xor.o 00:03:35.701 CC test/env/vtophys/vtophys.o 00:03:35.701 CXX test/cpp_headers/vmd.o 00:03:35.701 CXX test/cpp_headers/zipf.o 00:03:35.701 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:35.701 CC test/app/jsoncat/jsoncat.o 00:03:35.701 CC test/app/stub/stub.o 00:03:35.701 CC test/env/memory/memory_ut.o 
00:03:35.701 CC app/fio/nvme/fio_plugin.o 00:03:35.962 CC test/env/pci/pci_ut.o 00:03:35.962 CC test/dma/test_dma/test_dma.o 00:03:35.962 LINK spdk_nvme_discover 00:03:35.962 LINK interrupt_tgt 00:03:35.962 CC app/fio/bdev/fio_plugin.o 00:03:35.962 CC test/app/bdev_svc/bdev_svc.o 00:03:35.962 LINK rpc_client_test 00:03:35.962 LINK iscsi_tgt 00:03:35.962 LINK spdk_tgt 00:03:35.962 LINK spdk_trace_record 00:03:35.962 LINK nvmf_tgt 00:03:36.221 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:36.221 CC test/env/mem_callbacks/mem_callbacks.o 00:03:36.221 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:36.221 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:36.221 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:36.221 LINK histogram_perf 00:03:36.221 LINK spdk_trace 00:03:36.221 LINK jsoncat 00:03:36.221 LINK verify 00:03:36.221 LINK env_dpdk_post_init 00:03:36.480 LINK poller_perf 00:03:36.480 LINK spdk_dd 00:03:36.480 LINK zipf 00:03:36.480 LINK vtophys 00:03:36.480 LINK ioat_perf 00:03:36.481 LINK stub 00:03:36.481 LINK mem_callbacks 00:03:36.481 LINK bdev_svc 00:03:36.481 LINK test_dma 00:03:36.481 CC app/vhost/vhost.o 00:03:36.741 LINK pci_ut 00:03:36.741 LINK vhost_fuzz 00:03:36.741 LINK spdk_bdev 00:03:36.741 LINK spdk_nvme 00:03:36.741 LINK nvme_fuzz 00:03:36.741 LINK spdk_top 00:03:36.741 CC test/event/reactor/reactor.o 00:03:36.741 CC test/event/reactor_perf/reactor_perf.o 00:03:36.741 LINK vhost 00:03:36.741 CC test/event/event_perf/event_perf.o 00:03:36.742 CC test/event/app_repeat/app_repeat.o 00:03:36.742 LINK spdk_nvme_perf 00:03:37.001 CC test/event/scheduler/scheduler.o 00:03:37.001 LINK spdk_nvme_identify 00:03:37.001 CC examples/idxd/perf/perf.o 00:03:37.001 CC examples/sock/hello_world/hello_sock.o 00:03:37.001 CC examples/vmd/lsvmd/lsvmd.o 00:03:37.001 CC examples/vmd/led/led.o 00:03:37.001 LINK memory_ut 00:03:37.001 CC examples/thread/thread/thread_ex.o 00:03:37.001 LINK reactor_perf 00:03:37.001 LINK reactor 00:03:37.001 LINK app_repeat 00:03:37.001 LINK event_perf 00:03:37.001 CC test/nvme/reserve/reserve.o 00:03:37.001 CC test/nvme/connect_stress/connect_stress.o 00:03:37.001 CC test/nvme/aer/aer.o 00:03:37.001 CC test/nvme/reset/reset.o 00:03:37.001 CC test/nvme/e2edp/nvme_dp.o 00:03:37.001 CC test/nvme/sgl/sgl.o 00:03:37.001 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:37.001 CC test/nvme/fused_ordering/fused_ordering.o 00:03:37.001 CC test/nvme/err_injection/err_injection.o 00:03:37.001 CC test/nvme/overhead/overhead.o 00:03:37.001 CC test/nvme/fdp/fdp.o 00:03:37.001 CC test/nvme/compliance/nvme_compliance.o 00:03:37.001 LINK lsvmd 00:03:37.001 CC test/nvme/boot_partition/boot_partition.o 00:03:37.001 CC test/nvme/cuse/cuse.o 00:03:37.001 CC test/nvme/simple_copy/simple_copy.o 00:03:37.001 CC test/nvme/startup/startup.o 00:03:37.001 LINK led 00:03:37.261 CC test/blobfs/mkfs/mkfs.o 00:03:37.261 CC test/accel/dif/dif.o 00:03:37.261 LINK scheduler 00:03:37.261 LINK hello_sock 00:03:37.261 LINK idxd_perf 00:03:37.261 CC test/lvol/esnap/esnap.o 00:03:37.261 LINK thread 00:03:37.261 LINK connect_stress 00:03:37.261 LINK reserve 00:03:37.261 LINK boot_partition 00:03:37.261 LINK err_injection 00:03:37.261 LINK startup 00:03:37.261 LINK fused_ordering 00:03:37.261 LINK doorbell_aers 00:03:37.261 LINK sgl 00:03:37.261 LINK simple_copy 00:03:37.261 LINK mkfs 00:03:37.261 LINK reset 00:03:37.522 LINK aer 00:03:37.522 LINK nvme_dp 00:03:37.522 LINK overhead 00:03:37.522 LINK fdp 00:03:37.522 LINK nvme_compliance 00:03:37.522 LINK dif 00:03:37.784 LINK iscsi_fuzz 00:03:37.784 CC 
examples/nvme/abort/abort.o 00:03:37.784 CC examples/nvme/reconnect/reconnect.o 00:03:37.784 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:37.784 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:37.784 CC examples/nvme/hotplug/hotplug.o 00:03:37.784 CC examples/nvme/arbitration/arbitration.o 00:03:37.784 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:37.784 CC examples/nvme/hello_world/hello_world.o 00:03:37.784 CC examples/accel/perf/accel_perf.o 00:03:37.784 CC examples/blob/hello_world/hello_blob.o 00:03:37.784 CC examples/blob/cli/blobcli.o 00:03:37.784 LINK cmb_copy 00:03:37.784 LINK pmr_persistence 00:03:38.044 LINK hello_world 00:03:38.044 LINK hotplug 00:03:38.044 LINK abort 00:03:38.044 LINK reconnect 00:03:38.044 LINK arbitration 00:03:38.045 LINK hello_blob 00:03:38.045 LINK nvme_manage 00:03:38.045 CC test/bdev/bdevio/bdevio.o 00:03:38.305 LINK accel_perf 00:03:38.305 LINK cuse 00:03:38.305 LINK blobcli 00:03:38.564 LINK bdevio 00:03:38.824 CC examples/bdev/hello_world/hello_bdev.o 00:03:38.824 CC examples/bdev/bdevperf/bdevperf.o 00:03:39.083 LINK hello_bdev 00:03:39.344 LINK bdevperf 00:03:40.292 CC examples/nvmf/nvmf/nvmf.o 00:03:40.292 LINK nvmf 00:03:41.232 LINK esnap 00:03:41.804 00:03:41.804 real 0m37.265s 00:03:41.804 user 5m9.326s 00:03:41.804 sys 2m50.176s 00:03:41.804 10:19:47 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:41.804 10:19:47 make -- common/autotest_common.sh@10 -- $ set +x 00:03:41.804 ************************************ 00:03:41.804 END TEST make 00:03:41.804 ************************************ 00:03:41.804 10:19:47 -- common/autotest_common.sh@1142 -- $ return 0 00:03:41.804 10:19:47 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:41.804 10:19:47 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:41.804 10:19:47 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:41.804 10:19:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:41.804 10:19:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:41.804 10:19:47 -- pm/common@44 -- $ pid=1590491 00:03:41.804 10:19:47 -- pm/common@50 -- $ kill -TERM 1590491 00:03:41.805 10:19:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:41.805 10:19:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:41.805 10:19:47 -- pm/common@44 -- $ pid=1590492 00:03:41.805 10:19:47 -- pm/common@50 -- $ kill -TERM 1590492 00:03:41.805 10:19:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:41.805 10:19:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:41.805 10:19:47 -- pm/common@44 -- $ pid=1590494 00:03:41.805 10:19:47 -- pm/common@50 -- $ kill -TERM 1590494 00:03:41.805 10:19:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:41.805 10:19:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:41.805 10:19:47 -- pm/common@44 -- $ pid=1590511 00:03:41.805 10:19:47 -- pm/common@50 -- $ sudo -E kill -TERM 1590511 00:03:41.805 10:19:47 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:41.805 10:19:47 -- nvmf/common.sh@7 -- # uname -s 00:03:42.066 10:19:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:42.066 10:19:47 -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:03:42.066 10:19:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:42.066 10:19:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:42.066 10:19:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:42.066 10:19:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:42.066 10:19:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:42.066 10:19:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:42.066 10:19:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:42.066 10:19:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:42.066 10:19:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:03:42.066 10:19:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:03:42.066 10:19:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:42.066 10:19:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:42.066 10:19:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:42.066 10:19:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:42.066 10:19:47 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:42.066 10:19:47 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:42.066 10:19:47 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:42.066 10:19:47 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:42.066 10:19:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:42.066 10:19:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:42.066 10:19:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:42.066 10:19:47 -- paths/export.sh@5 -- # export PATH 00:03:42.066 10:19:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:42.066 10:19:47 -- nvmf/common.sh@47 -- # : 0 00:03:42.066 10:19:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:42.066 10:19:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:42.066 10:19:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:42.066 10:19:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:42.066 10:19:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:42.066 10:19:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:42.066 10:19:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:42.066 10:19:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:42.066 10:19:47 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 
00:03:42.066 10:19:47 -- spdk/autotest.sh@32 -- # uname -s 00:03:42.066 10:19:47 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:42.066 10:19:47 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:42.066 10:19:47 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:42.066 10:19:47 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:42.066 10:19:47 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:42.066 10:19:47 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:42.066 10:19:47 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:42.066 10:19:47 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:42.066 10:19:47 -- spdk/autotest.sh@48 -- # udevadm_pid=1666582 00:03:42.066 10:19:47 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:42.066 10:19:47 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:42.067 10:19:47 -- pm/common@17 -- # local monitor 00:03:42.067 10:19:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:42.067 10:19:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:42.067 10:19:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:42.067 10:19:47 -- pm/common@21 -- # date +%s 00:03:42.067 10:19:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:42.067 10:19:47 -- pm/common@21 -- # date +%s 00:03:42.067 10:19:47 -- pm/common@25 -- # sleep 1 00:03:42.067 10:19:47 -- pm/common@21 -- # date +%s 00:03:42.067 10:19:47 -- pm/common@21 -- # date +%s 00:03:42.067 10:19:47 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721636387 00:03:42.067 10:19:47 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721636387 00:03:42.067 10:19:47 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721636387 00:03:42.067 10:19:47 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721636387 00:03:42.067 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721636387_collect-vmstat.pm.log 00:03:42.067 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721636387_collect-cpu-load.pm.log 00:03:42.067 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721636387_collect-cpu-temp.pm.log 00:03:42.067 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721636387_collect-bmc-pm.bmc.pm.log 00:03:43.010 10:19:48 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:43.010 10:19:48 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:43.010 10:19:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:43.010 10:19:48 -- 
common/autotest_common.sh@10 -- # set +x 00:03:43.010 10:19:48 -- spdk/autotest.sh@59 -- # create_test_list 00:03:43.010 10:19:48 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:43.010 10:19:48 -- common/autotest_common.sh@10 -- # set +x 00:03:43.010 10:19:48 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:43.010 10:19:48 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:43.010 10:19:48 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:43.010 10:19:48 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:43.010 10:19:48 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:43.010 10:19:48 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:43.010 10:19:48 -- common/autotest_common.sh@1455 -- # uname 00:03:43.010 10:19:48 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:43.010 10:19:48 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:43.010 10:19:48 -- common/autotest_common.sh@1475 -- # uname 00:03:43.010 10:19:48 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:43.010 10:19:48 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:43.010 10:19:48 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:43.010 10:19:48 -- spdk/autotest.sh@72 -- # hash lcov 00:03:43.010 10:19:48 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:43.010 10:19:48 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:43.010 --rc lcov_branch_coverage=1 00:03:43.010 --rc lcov_function_coverage=1 00:03:43.010 --rc genhtml_branch_coverage=1 00:03:43.010 --rc genhtml_function_coverage=1 00:03:43.010 --rc genhtml_legend=1 00:03:43.010 --rc geninfo_all_blocks=1 00:03:43.010 ' 00:03:43.010 10:19:48 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:43.010 --rc lcov_branch_coverage=1 00:03:43.010 --rc lcov_function_coverage=1 00:03:43.010 --rc genhtml_branch_coverage=1 00:03:43.010 --rc genhtml_function_coverage=1 00:03:43.010 --rc genhtml_legend=1 00:03:43.010 --rc geninfo_all_blocks=1 00:03:43.010 ' 00:03:43.010 10:19:48 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:43.010 --rc lcov_branch_coverage=1 00:03:43.010 --rc lcov_function_coverage=1 00:03:43.010 --rc genhtml_branch_coverage=1 00:03:43.010 --rc genhtml_function_coverage=1 00:03:43.010 --rc genhtml_legend=1 00:03:43.010 --rc geninfo_all_blocks=1 00:03:43.010 --no-external' 00:03:43.010 10:19:48 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:43.010 --rc lcov_branch_coverage=1 00:03:43.010 --rc lcov_function_coverage=1 00:03:43.010 --rc genhtml_branch_coverage=1 00:03:43.010 --rc genhtml_function_coverage=1 00:03:43.010 --rc genhtml_legend=1 00:03:43.010 --rc geninfo_all_blocks=1 00:03:43.010 --no-external' 00:03:43.010 10:19:48 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:43.271 lcov: LCOV version 1.14 00:03:43.271 10:19:48 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 
00:03:58.173 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:58.173 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:10.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:10.397 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:04:10.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:10.397 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:04:10.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:10.397 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:04:10.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:10.397 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:04:10.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:10.397 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:04:10.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:10.397 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:04:10.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:10.397 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:04:10.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:10.397 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:04:10.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:10.397 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:04:10.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:10.397 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:04:10.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:10.397 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:04:10.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:10.397 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:10.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:10.397 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:04:10.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:10.397 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:04:10.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:10.397 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:04:10.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:10.397 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:04:10.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:04:10.397 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:04:10.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:10.397 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:04:10.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:10.397 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:04:10.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:10.397 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:04:10.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:10.397 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:04:10.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:10.397 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:04:10.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:10.397 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:04:10.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:04:10.397 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:04:10.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:10.397 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:04:10.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:04:10.397 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:04:10.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:10.397 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:04:10.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:10.397 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:04:10.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:10.397 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:04:10.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:04:10.397 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:04:10.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:10.397 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:04:10.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:10.397 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:04:10.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:10.397 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:04:10.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:10.397 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:04:10.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:10.397 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:04:10.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:10.397 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:04:10.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:04:10.397 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:04:10.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:10.397 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:04:10.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:10.397 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:04:10.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:10.397 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:04:10.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:10.398 geninfo: WARNING: GCOV did 
not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:04:10.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:04:10.398 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno 00:04:10.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:10.398 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:10.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:10.398 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:04:10.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:10.398 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:04:10.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:10.398 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:04:10.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:10.398 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:10.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:10.398 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:04:10.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:04:10.398 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:04:10.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:10.398 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:10.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:10.398 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:04:10.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:10.398 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:04:10.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:10.398 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:04:10.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:10.398 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:04:10.398 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:10.398 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:04:10.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:10.398 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:10.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:04:10.398 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:04:10.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:10.398 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:04:10.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:10.398 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:04:10.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:10.398 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:04:10.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:10.398 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:04:10.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:10.398 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:10.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:10.398 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:10.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:10.398 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:10.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:10.398 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:04:10.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:10.398 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:04:10.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:10.398 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:04:10.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:10.398 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:04:10.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:10.398 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:04:10.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:10.398 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:04:10.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:10.398 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:04:10.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:10.398 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:04:10.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:10.398 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:04:10.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:10.398 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:04:10.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:10.398 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:04:10.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:10.398 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:04:10.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:10.398 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:04:10.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:10.398 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:04:10.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:04:10.398 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:04:10.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:10.398 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:04:10.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:10.398 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:04:10.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:10.398 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:04:10.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:04:10.398 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:04:10.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:10.398 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:10.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:04:10.398 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:04:10.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:10.398 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:10.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:10.398 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:04:10.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:10.398 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:04:10.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:10.398 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:04:12.308 10:20:17 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:12.308 10:20:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:12.308 10:20:17 -- common/autotest_common.sh@10 -- # set +x 00:04:12.308 10:20:17 -- spdk/autotest.sh@91 -- # rm -f 00:04:12.308 10:20:17 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:16.509 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:04:16.509 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:04:16.509 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:04:16.509 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:04:16.509 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:04:16.509 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:04:16.509 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:04:16.509 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:04:16.509 0000:65:00.0 (144d a80a): Already using the nvme driver 00:04:16.509 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:04:16.509 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:04:16.509 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:04:16.509 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:04:16.509 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:04:16.509 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:04:16.510 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:04:16.510 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:04:16.510 10:20:21 -- spdk/autotest.sh@96 -- # 
get_zoned_devs 00:04:16.510 10:20:21 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:16.510 10:20:21 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:16.510 10:20:21 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:16.510 10:20:21 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:16.510 10:20:21 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:16.510 10:20:21 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:16.510 10:20:21 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:16.510 10:20:21 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:16.510 10:20:21 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:16.510 10:20:21 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:16.510 10:20:21 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:16.510 10:20:21 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:16.510 10:20:21 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:16.510 10:20:21 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:16.510 No valid GPT data, bailing 00:04:16.510 10:20:22 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:16.510 10:20:22 -- scripts/common.sh@391 -- # pt= 00:04:16.510 10:20:22 -- scripts/common.sh@392 -- # return 1 00:04:16.510 10:20:22 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:16.510 1+0 records in 00:04:16.510 1+0 records out 00:04:16.510 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.004933 s, 213 MB/s 00:04:16.510 10:20:22 -- spdk/autotest.sh@118 -- # sync 00:04:16.510 10:20:22 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:16.510 10:20:22 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:16.510 10:20:22 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:24.645 10:20:30 -- spdk/autotest.sh@124 -- # uname -s 00:04:24.645 10:20:30 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:24.645 10:20:30 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:24.645 10:20:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:24.645 10:20:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.645 10:20:30 -- common/autotest_common.sh@10 -- # set +x 00:04:24.645 ************************************ 00:04:24.645 START TEST setup.sh 00:04:24.645 ************************************ 00:04:24.645 10:20:30 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:24.645 * Looking for test storage... 
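The pre-cleanup traced above (autotest.sh@94-118) reduces to: scan sysfs for zoned NVMe namespaces, skip any whole namespace that still carries a partition table, zero the first MiB of what remains, then sync. Below is a minimal standalone sketch of that flow; it is not the autotest_common.sh/scripts code itself, it substitutes plain blkid for the spdk-gpt.py probe, and it only prints the destructive dd instead of running it.

  #!/usr/bin/env bash
  # Hedged sketch of the zoned-device scan and wipe guard seen in the trace above.
  set -euo pipefail

  declare -A zoned_devs=()
  for nvme in /sys/block/nvme*; do
      [[ -e $nvme ]] || continue                    # no NVMe block devices present
      name=${nvme##*/}
      # "none" marks a conventional namespace; anything else is zoned and must not be wiped.
      if [[ -e $nvme/queue/zoned && $(<"$nvme/queue/zoned") != none ]]; then
          zoned_devs[$name]=1
      fi
  done

  for dev in /dev/nvme*n*; do
      [[ -b $dev ]] || continue
      [[ $dev == *p* ]] && continue                 # whole namespaces only, skip partitions
      name=${dev##*/}
      [[ -n ${zoned_devs[$name]:-} ]] && continue
      # Wipe only when no partition table is detected (the "No valid GPT data, bailing" case).
      if ! blkid -s PTTYPE -o value "$dev" | grep -q .; then
          echo "would run: dd if=/dev/zero of=$dev bs=1M count=1"   # destructive; printed only
      fi
  done
  sync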
00:04:24.645 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:24.645 10:20:30 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:24.645 10:20:30 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:24.645 10:20:30 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:24.645 10:20:30 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:24.645 10:20:30 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.645 10:20:30 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:24.645 ************************************ 00:04:24.645 START TEST acl 00:04:24.645 ************************************ 00:04:24.645 10:20:30 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:24.906 * Looking for test storage... 00:04:24.906 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:24.906 10:20:30 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:24.906 10:20:30 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:24.906 10:20:30 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:24.906 10:20:30 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:24.906 10:20:30 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:24.906 10:20:30 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:24.906 10:20:30 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:24.906 10:20:30 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:24.906 10:20:30 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:24.906 10:20:30 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:24.906 10:20:30 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:24.906 10:20:30 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:24.906 10:20:30 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:24.906 10:20:30 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:24.906 10:20:30 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:24.906 10:20:30 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:29.227 10:20:34 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:29.227 10:20:34 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:29.227 10:20:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.227 10:20:34 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:29.227 10:20:34 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.227 10:20:34 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:32.524 Hugepages 00:04:32.524 node hugesize free / total 00:04:32.524 10:20:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:32.524 10:20:38 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:32.524 10:20:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:32.524 10:20:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:32.524 10:20:38 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:32.524 10:20:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ 
driver _ 00:04:32.524 10:20:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:32.524 10:20:38 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:32.524 10:20:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:32.524 00:04:32.524 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:32.524 10:20:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:32.524 10:20:38 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:32.524 10:20:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:32.524 10:20:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:04:32.524 10:20:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:32.524 10:20:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:32.524 10:20:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:32.524 10:20:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.1 == *:*:*.* ]] 00:04:32.524 10:20:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:32.524 10:20:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:32.524 10:20:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:32.524 10:20:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:04:32.524 10:20:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:32.524 10:20:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:32.524 10:20:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:32.524 10:20:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:04:32.524 10:20:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:32.524 10:20:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:32.524 10:20:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:32.524 10:20:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:04:32.524 10:20:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:32.524 10:20:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:32.524 10:20:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:32.524 10:20:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:04:32.524 10:20:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:32.524 10:20:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:32.524 10:20:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:32.524 10:20:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:04:32.524 10:20:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:32.524 10:20:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:32.524 10:20:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:32.524 10:20:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:04:32.524 10:20:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:32.524 10:20:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:32.524 10:20:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:32.783 10:20:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:04:32.783 10:20:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:32.783 10:20:38 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:32.783 10:20:38 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:32.783 10:20:38 setup.sh.acl 
-- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:32.783 10:20:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:32.783 10:20:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:04:32.783 10:20:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:32.783 10:20:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:32.783 10:20:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:32.783 10:20:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:04:32.783 10:20:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:32.783 10:20:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:32.783 10:20:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:32.783 10:20:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:04:32.783 10:20:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:32.783 10:20:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:32.783 10:20:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:32.783 10:20:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:04:32.783 10:20:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:32.783 10:20:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:32.783 10:20:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:32.783 10:20:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:04:32.783 10:20:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:32.783 10:20:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:32.783 10:20:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:32.783 10:20:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:04:32.783 10:20:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:32.783 10:20:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:32.783 10:20:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:32.783 10:20:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:04:32.783 10:20:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:32.783 10:20:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:32.783 10:20:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:32.783 10:20:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:04:32.783 10:20:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:32.783 10:20:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:32.783 10:20:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:32.783 10:20:38 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:32.783 10:20:38 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:32.783 10:20:38 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:32.783 10:20:38 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.783 10:20:38 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:32.783 ************************************ 00:04:32.783 START TEST denied 00:04:32.783 ************************************ 00:04:32.783 10:20:38 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:04:32.783 10:20:38 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:04:32.783 10:20:38 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 
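The denied test that continues below runs setup.sh config with PCI_BLOCKED=' 0000:65:00.0' and greps for the "Skipping denied controller" line; verify() then confirms the binding by dereferencing the device's driver symlink in sysfs. A short sketch of that driver lookup follows; the BDF is the NVMe controller from this log (substitute your own), and this is not the acl.sh source itself.

  # Resolve which kernel driver a PCI device is currently bound to via sysfs.
  bdf=0000:65:00.0
  if [[ -e /sys/bus/pci/devices/$bdf/driver ]]; then
      driver=$(basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")")
  else
      driver=unbound
  fi
  echo "$bdf -> $driver"   # "nvme" while the controller is blocked, "vfio-pci" once allowed and rebound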
00:04:32.783 10:20:38 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:04:32.783 10:20:38 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:32.783 10:20:38 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:36.991 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:04:36.991 10:20:42 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:04:36.991 10:20:42 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:36.991 10:20:42 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:36.991 10:20:42 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:04:36.991 10:20:42 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:04:36.991 10:20:42 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:36.991 10:20:42 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:36.991 10:20:42 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:36.991 10:20:42 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:36.991 10:20:42 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:42.272 00:04:42.273 real 0m8.988s 00:04:42.273 user 0m3.015s 00:04:42.273 sys 0m5.323s 00:04:42.273 10:20:47 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.273 10:20:47 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:42.273 ************************************ 00:04:42.273 END TEST denied 00:04:42.273 ************************************ 00:04:42.273 10:20:47 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:42.273 10:20:47 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:42.273 10:20:47 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:42.273 10:20:47 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.273 10:20:47 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:42.273 ************************************ 00:04:42.273 START TEST allowed 00:04:42.273 ************************************ 00:04:42.273 10:20:47 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:42.273 10:20:47 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:04:42.273 10:20:47 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:42.273 10:20:47 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:04:42.273 10:20:47 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.273 10:20:47 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:47.558 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:47.558 10:20:53 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:47.558 10:20:53 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:47.558 10:20:53 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:47.558 10:20:53 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:47.558 10:20:53 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:51.762 00:04:51.762 real 0m9.943s 
00:04:51.762 user 0m2.937s 00:04:51.762 sys 0m5.328s 00:04:51.762 10:20:57 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.762 10:20:57 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:51.762 ************************************ 00:04:51.762 END TEST allowed 00:04:51.762 ************************************ 00:04:51.762 10:20:57 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:51.762 00:04:51.762 real 0m27.125s 00:04:51.762 user 0m8.949s 00:04:51.762 sys 0m16.030s 00:04:51.762 10:20:57 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.762 10:20:57 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:51.762 ************************************ 00:04:51.762 END TEST acl 00:04:51.762 ************************************ 00:04:51.762 10:20:57 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:51.762 10:20:57 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:51.762 10:20:57 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.762 10:20:57 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.762 10:20:57 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:52.025 ************************************ 00:04:52.025 START TEST hugepages 00:04:52.025 ************************************ 00:04:52.025 10:20:57 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:52.025 * Looking for test storage... 00:04:52.025 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338856 kB' 'MemFree: 106031440 kB' 'MemAvailable: 109375952 kB' 'Buffers: 4132 kB' 'Cached: 11602596 kB' 'SwapCached: 0 kB' 'Active: 8691856 kB' 'Inactive: 3526936 kB' 
'Active(anon): 8201332 kB' 'Inactive(anon): 0 kB' 'Active(file): 490524 kB' 'Inactive(file): 3526936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 615940 kB' 'Mapped: 224200 kB' 'Shmem: 7589268 kB' 'KReclaimable: 302768 kB' 'Slab: 1117408 kB' 'SReclaimable: 302768 kB' 'SUnreclaim: 814640 kB' 'KernelStack: 27632 kB' 'PageTables: 8704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460880 kB' 'Committed_AS: 9648240 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236904 kB' 'VmallocChunk: 0 kB' 'Percpu: 110016 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3888500 kB' 'DirectMap2M: 32491520 kB' 'DirectMap1G: 99614720 kB' 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.025 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.025 10:20:57 
setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- 
# [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ 
VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ 
Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.026 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.027 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.027 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.027 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.027 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.027 10:20:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.027 10:20:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.027 10:20:57 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:52.027 10:20:57 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:52.027 10:20:57 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:52.027 10:20:57 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:52.027 10:20:57 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:52.027 10:20:57 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:52.027 10:20:57 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:52.027 10:20:57 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:52.027 10:20:57 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:52.027 10:20:57 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:52.027 10:20:57 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:52.027 10:20:57 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:52.027 10:20:57 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:52.027 10:20:57 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:52.027 10:20:57 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:52.027 10:20:57 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:52.027 10:20:57 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:52.027 10:20:57 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 
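get_meminfo above pulls Hugepagesize out of /proc/meminfo with an IFS=': ' read loop, and clear_hp (whose trace continues below) zeroes every per-node hugepage pool through sysfs. A compact sketch of both steps, written as standalone shell rather than the setup/common.sh and setup/hugepages.sh helpers; the sysfs writes are skipped unless the files are writable (i.e. you are root).

  # Parse Hugepagesize (kB) the same way the traced get_meminfo loop does.
  default_hugepages=0
  while IFS=': ' read -r var val _; do
      [[ $var == Hugepagesize ]] && default_hugepages=$val   # e.g. 2048 on this node
  done < /proc/meminfo
  echo "Hugepagesize: ${default_hugepages} kB"

  # clear_hp equivalent: write 0 to every per-node pool (2048kB and 1048576kB sizes).
  for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
      [[ -w $hp ]] && echo 0 > "$hp"
  done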
00:04:52.027 10:20:57 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:52.027 10:20:57 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:52.027 10:20:57 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:52.027 10:20:57 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:52.027 10:20:57 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:52.027 10:20:57 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:52.027 10:20:57 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:52.027 10:20:57 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:52.027 10:20:57 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:52.027 10:20:57 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:52.027 10:20:57 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:52.027 10:20:57 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:52.027 10:20:57 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:52.027 10:20:57 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:52.027 10:20:57 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:52.027 10:20:57 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.027 10:20:57 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:52.027 ************************************ 00:04:52.027 START TEST default_setup 00:04:52.027 ************************************ 00:04:52.027 10:20:57 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:04:52.027 10:20:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:52.027 10:20:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:52.027 10:20:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:52.027 10:20:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:52.027 10:20:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:52.027 10:20:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:52.027 10:20:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:52.027 10:20:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:52.027 10:20:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:52.027 10:20:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:52.027 10:20:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:52.027 10:20:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:52.027 10:20:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:52.027 10:20:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:52.027 10:20:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:52.027 
10:20:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:52.027 10:20:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:52.027 10:20:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:52.027 10:20:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:52.027 10:20:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:52.027 10:20:57 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:52.027 10:20:57 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:56.245 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:56.245 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:56.245 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:56.245 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:56.245 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:56.245 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:56.245 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:56.245 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:56.245 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:56.245 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:56.245 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:56.245 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:56.245 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:56.245 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:56.245 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:56.245 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:56.245 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.245 10:21:01 
setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338856 kB' 'MemFree: 108129148 kB' 'MemAvailable: 111473704 kB' 'Buffers: 4132 kB' 'Cached: 11602716 kB' 'SwapCached: 0 kB' 'Active: 8718196 kB' 'Inactive: 3526936 kB' 'Active(anon): 8227672 kB' 'Inactive(anon): 0 kB' 'Active(file): 490524 kB' 'Inactive(file): 3526936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641876 kB' 'Mapped: 224804 kB' 'Shmem: 7589388 kB' 'KReclaimable: 302856 kB' 'Slab: 1116632 kB' 'SReclaimable: 302856 kB' 'SUnreclaim: 813776 kB' 'KernelStack: 27744 kB' 'PageTables: 9444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 9679820 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237228 kB' 'VmallocChunk: 0 kB' 'Percpu: 110016 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3888500 kB' 'DirectMap2M: 32491520 kB' 'DirectMap1G: 99614720 kB' 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.245 10:21:01 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.245 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338856 kB' 'MemFree: 108137944 kB' 'MemAvailable: 111482500 kB' 'Buffers: 4132 kB' 'Cached: 11602720 kB' 'SwapCached: 0 kB' 'Active: 8713780 kB' 'Inactive: 3526936 kB' 'Active(anon): 8223256 kB' 'Inactive(anon): 0 kB' 'Active(file): 490524 kB' 'Inactive(file): 3526936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 637636 kB' 'Mapped: 225308 kB' 'Shmem: 7589392 kB' 'KReclaimable: 302856 kB' 'Slab: 1116632 kB' 'SReclaimable: 302856 kB' 'SUnreclaim: 813776 kB' 'KernelStack: 27760 kB' 'PageTables: 9504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 9676592 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237192 kB' 'VmallocChunk: 0 kB' 'Percpu: 110016 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3888500 kB' 'DirectMap2M: 32491520 kB' 'DirectMap1G: 99614720 kB' 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.246 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.247 10:21:01 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.247 10:21:01 
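This second pass repeats the same loop for HugePages_Surp, the count of surplus pages allocated beyond the static pool when nr_overcommit_hugepages allows it. The same counters are also exported per page size under /sys/kernel/mm/hugepages/, which can serve as a cross-check; the snippet below is illustrative and is not part of the test script.

    # Standard kernel sysfs layout for the default 2 MiB page size.
    sz=hugepages-2048kB
    for f in nr_hugepages free_hugepages surplus_hugepages resv_hugepages; do
        printf '%-18s %s\n' "$f" "$(cat /sys/kernel/mm/hugepages/$sz/$f)"
    done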
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.247 10:21:01 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.247 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.248 
10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@99 -- # surp=0 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338856 kB' 'MemFree: 108144120 kB' 'MemAvailable: 111488676 kB' 'Buffers: 4132 kB' 'Cached: 11602736 kB' 'SwapCached: 0 kB' 'Active: 8717940 kB' 'Inactive: 3526936 kB' 'Active(anon): 8227416 kB' 'Inactive(anon): 0 kB' 'Active(file): 490524 kB' 'Inactive(file): 3526936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641596 kB' 'Mapped: 224900 kB' 'Shmem: 7589408 kB' 'KReclaimable: 302856 kB' 'Slab: 1116020 kB' 'SReclaimable: 302856 kB' 'SUnreclaim: 813164 kB' 'KernelStack: 27728 kB' 'PageTables: 9400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 9682732 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237196 kB' 'VmallocChunk: 0 kB' 'Percpu: 110016 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3888500 kB' 'DirectMap2M: 32491520 kB' 'DirectMap1G: 99614720 kB' 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.248 10:21:01 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.248 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.249 10:21:01 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.249 10:21:01 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.249 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:56.250 nr_hugepages=1024 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:56.250 resv_hugepages=0 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:56.250 surplus_hugepages=0 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:56.250 anon_hugepages=0 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:56.250 
10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338856 kB' 'MemFree: 108142276 kB' 'MemAvailable: 111486832 kB' 'Buffers: 4132 kB' 'Cached: 11602760 kB' 'SwapCached: 0 kB' 'Active: 8713844 kB' 'Inactive: 3526936 kB' 'Active(anon): 8223320 kB' 'Inactive(anon): 0 kB' 'Active(file): 490524 kB' 'Inactive(file): 3526936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 637496 kB' 'Mapped: 224900 kB' 'Shmem: 7589432 kB' 'KReclaimable: 302856 kB' 'Slab: 1115996 kB' 'SReclaimable: 302856 kB' 'SUnreclaim: 813140 kB' 'KernelStack: 27840 kB' 'PageTables: 9872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 9678256 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237224 kB' 'VmallocChunk: 0 kB' 'Percpu: 110016 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3888500 kB' 'DirectMap2M: 32491520 kB' 'DirectMap1G: 99614720 kB' 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.250 10:21:01 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.250 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.251 10:21:01 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
[[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.251 
10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
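The block of near-identical records here is setup/common.sh's get_meminfo scan: common.sh@31 resets IFS=': ' and reads the next "var val" pair, common.sh@32 compares var against the requested key (HugePages_Rsvd above, HugePages_Total below) and hits continue on every miss, and common.sh@33 echoes the value and returns 0 on the match. Condensed into a standalone sketch (illustrative only; get_meminfo_value is not an SPDK function), the same pattern is:

  # Minimal sketch of the scan traced above: split each /proc/meminfo line on
  # ': ', stop at the requested key, print its bare value (the unit, if any,
  # falls into the throwaway "_" field).
  get_meminfo_value() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "$val"
              return 0
          fi
      done < /proc/meminfo
      return 1
  }
  # Example: get_meminfo_value HugePages_Rsvd   -> 0 on this host
  #          get_meminfo_value HugePages_Total  -> 1024 after default_setup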
00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.251 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.252 10:21:01 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 52247440 kB' 'MemUsed: 13411568 kB' 'SwapCached: 0 kB' 'Active: 6198920 kB' 'Inactive: 3423008 kB' 'Active(anon): 5884340 kB' 'Inactive(anon): 0 kB' 'Active(file): 314580 kB' 'Inactive(file): 3423008 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9376004 kB' 'Mapped: 132768 kB' 'AnonPages: 249304 kB' 'Shmem: 5638416 kB' 'KernelStack: 14136 kB' 'PageTables: 4548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 195320 kB' 'Slab: 628220 kB' 'SReclaimable: 195320 kB' 'SUnreclaim: 432900 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
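From common.sh@18 onward the same helper runs node-scoped: with node=0 it swaps mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo (common.sh@23-24), mapfiles the contents, and strips the per-line "Node 0 " prefix with the extglob substitution at common.sh@29 before repeating the key scan, this time for HugePages_Surp. A hedged sketch of just that source selection and prefix strip (node_meminfo is an illustrative name, not part of the SPDK scripts):

  # Sketch of the per-node source selection: read node-local meminfo when a
  # node id is given, otherwise the system-wide file, and drop the
  # "Node <id> " prefix so both sources parse identically.
  shopt -s extglob
  node_meminfo() {
      local node=$1 mem_f=/proc/meminfo
      local -a mem
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")
      printf '%s\n' "${mem[@]}"
  }
  # Example: node_meminfo 0 | grep HugePages_Surp   prints the node-0 surplus-page line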
00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.252 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:56.253 node0=1024 expecting 1024 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:56.253 00:04:56.253 real 0m4.063s 00:04:56.253 user 0m1.528s 00:04:56.253 sys 0m2.512s 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:56.253 10:21:01 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:56.253 ************************************ 00:04:56.253 END TEST default_setup 00:04:56.253 ************************************ 00:04:56.253 10:21:01 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:56.253 10:21:01 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:56.253 10:21:01 setup.sh.hugepages -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:56.253 10:21:01 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.253 10:21:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:56.253 ************************************ 00:04:56.253 START TEST per_node_1G_alloc 00:04:56.253 ************************************ 00:04:56.253 10:21:01 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:04:56.253 10:21:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:56.253 10:21:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:56.253 10:21:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:56.253 10:21:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:56.253 10:21:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:56.253 10:21:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:56.253 10:21:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:56.253 10:21:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:56.253 10:21:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:56.253 10:21:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:56.253 10:21:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:56.253 10:21:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:56.253 10:21:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:56.253 10:21:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:56.253 10:21:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:56.253 10:21:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:56.253 10:21:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:56.253 10:21:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:56.253 10:21:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:56.253 10:21:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:56.253 10:21:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:56.253 10:21:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:56.253 10:21:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:56.253 10:21:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:56.253 10:21:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:56.253 10:21:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:56.254 10:21:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:00.467 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 
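Here setup.sh is invoked with NRHUGE=512 and HUGENODE=0,1 (hugepages.sh@146), i.e. 512 default-size 2048 kB pages on each of the host's two NUMA nodes, and the "Already using the vfio-pci driver" records that follow are it leaving the already-bound PCI functions alone before it adjusts hugepage counts. The kernel interface behind a per-node request of that shape is the nodeN sysfs knob; purely as an illustration (not a transcript of what setup.sh runs), an equivalent manual allocation would be:

  # Illustration only: ask for 512 x 2048 kB pages on nodes 0 and 1 and read
  # back how many the kernel actually granted on each node.
  for node in 0 1; do
      knob=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
      echo 512 | sudo tee "$knob" > /dev/null
      echo "node$node nr_hugepages: $(cat "$knob")"
  done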
00:05:00.467 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:05:00.467 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:05:00.467 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:05:00.467 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:05:00.467 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:05:00.467 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:05:00.467 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:05:00.467 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:05:00.467 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:05:00.467 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:05:00.467 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:05:00.467 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:05:00.467 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:05:00.467 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:05:00.467 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:05:00.467 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:05:00.467 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:05:00.467 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:00.467 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:00.467 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:00.467 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:00.467 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:00.467 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:00.467 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:00.467 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:00.467 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:00.467 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:00.467 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:00.467 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:00.467 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:00.467 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.467 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.467 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.467 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.467 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.467 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.467 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338856 kB' 'MemFree: 
108136360 kB' 'MemAvailable: 111480908 kB' 'Buffers: 4132 kB' 'Cached: 11602896 kB' 'SwapCached: 0 kB' 'Active: 8716004 kB' 'Inactive: 3526936 kB' 'Active(anon): 8225480 kB' 'Inactive(anon): 0 kB' 'Active(file): 490524 kB' 'Inactive(file): 3526936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 638696 kB' 'Mapped: 224280 kB' 'Shmem: 7589568 kB' 'KReclaimable: 302840 kB' 'Slab: 1115352 kB' 'SReclaimable: 302840 kB' 'SUnreclaim: 812512 kB' 'KernelStack: 27664 kB' 'PageTables: 8756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 9665684 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237164 kB' 'VmallocChunk: 0 kB' 'Percpu: 110016 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3888500 kB' 'DirectMap2M: 32491520 kB' 'DirectMap1G: 99614720 kB' 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.468 10:21:05 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.468 10:21:05 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.468 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.469 10:21:05 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338856 kB' 'MemFree: 108138168 kB' 'MemAvailable: 111482716 kB' 'Buffers: 4132 kB' 'Cached: 11602900 kB' 'SwapCached: 0 kB' 'Active: 8716292 kB' 'Inactive: 3526936 kB' 'Active(anon): 8225768 kB' 'Inactive(anon): 0 kB' 'Active(file): 490524 kB' 'Inactive(file): 3526936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 639032 kB' 'Mapped: 224280 kB' 'Shmem: 7589572 kB' 'KReclaimable: 302840 kB' 'Slab: 1115312 kB' 'SReclaimable: 302840 kB' 'SUnreclaim: 812472 kB' 'KernelStack: 27680 kB' 'PageTables: 9132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 9665704 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237148 kB' 'VmallocChunk: 0 kB' 'Percpu: 110016 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 
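The trace above is the harness's get_meminfo helper scanning /proc/meminfo one key at a time: it reads the file into an array, strips any "Node <n>" prefix, splits each line on ': ', and continues past every key until the requested one (here AnonHugePages) matches, then echoes the value and returns. The following is only a minimal sketch of that pattern as it appears in the trace, not the actual setup/common.sh source; the node handling and names are assumptions.

shopt -s extglob        # needed for the +([0-9]) pattern used below

get_meminfo_sketch() {
    local get=$1 node=${2-}
    local mem_f=/proc/meminfo
    local var val _ line
    # When a NUMA node is given and a per-node meminfo exists, read that instead
    # (with node= left empty, the path check fails and /proc/meminfo is used).
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <n> "; drop that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # not the key we want; keep scanning
        echo "$val"                        # numeric value; the kB unit is dropped
        return 0
    done
    return 1
}

get_meminfo_sketch HugePages_Free    # e.g. prints 1024 on this runner per the dump above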
'Hugetlb: 2097152 kB' 'DirectMap4k: 3888500 kB' 'DirectMap2M: 32491520 kB' 'DirectMap1G: 99614720 kB' 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.469 10:21:05 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.469 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.470 10:21:05 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.470 10:21:05 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.470 10:21:05 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.470 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338856 kB' 'MemFree: 108138824 kB' 'MemAvailable: 111483372 kB' 'Buffers: 4132 kB' 'Cached: 11602916 kB' 'SwapCached: 0 kB' 'Active: 8715836 kB' 'Inactive: 3526936 kB' 'Active(anon): 8225312 kB' 'Inactive(anon): 0 kB' 'Active(file): 490524 kB' 'Inactive(file): 3526936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 639012 kB' 'Mapped: 224204 kB' 'Shmem: 7589588 kB' 'KReclaimable: 302840 kB' 'Slab: 1115296 kB' 'SReclaimable: 302840 kB' 'SUnreclaim: 812456 kB' 'KernelStack: 27680 kB' 'PageTables: 9128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 9665728 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237148 kB' 'VmallocChunk: 0 kB' 'Percpu: 110016 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
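At this point the AnonHugePages and HugePages_Surp lookups have both returned 0, and the same scan is about to run for HugePages_Rsvd; these counters feed the pool-consistency checks at hugepages.sh@107/@109 later in the trace. Below is a standalone sketch of that kind of accounting check; the awk lookup, the exact comparison, and the variable names are assumptions for illustration, not the test's actual code.

#!/usr/bin/env bash
# Read a single numeric field (kB or pages) out of /proc/meminfo.
meminfo_val() { awk -v key="$1:" '$1 == key {print $2}' /proc/meminfo; }

nr_hugepages=1024                          # 2048 kB pages requested by the test
total=$(meminfo_val HugePages_Total)
surp=$(meminfo_val HugePages_Surp)
resv=$(meminfo_val HugePages_Rsvd)
anon=$(meminfo_val AnonHugePages)          # kB; expected to stay 0 during setup

echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
(( total == nr_hugepages + surp + resv )) || echo "surplus/reserved pages do not add up" >&2
(( total == nr_hugepages )) || echo "allocated pool differs from the request" >&2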
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3888500 kB' 'DirectMap2M: 32491520 kB' 'DirectMap1G: 99614720 kB' 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.471 10:21:05 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.471 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.472 10:21:05 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.472 10:21:05 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.472 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.473 10:21:05 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.473 10:21:05 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:00.473 nr_hugepages=1024 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:00.473 resv_hugepages=0 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:00.473 surplus_hugepages=0 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:00.473 anon_hugepages=0 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338856 kB' 'MemFree: 108138824 kB' 'MemAvailable: 111483372 kB' 'Buffers: 4132 kB' 'Cached: 11602960 kB' 'SwapCached: 0 kB' 'Active: 8715524 kB' 'Inactive: 3526936 kB' 'Active(anon): 8225000 kB' 'Inactive(anon): 0 kB' 'Active(file): 490524 kB' 'Inactive(file): 3526936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 
'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 638628 kB' 'Mapped: 224204 kB' 'Shmem: 7589632 kB' 'KReclaimable: 302840 kB' 'Slab: 1115296 kB' 'SReclaimable: 302840 kB' 'SUnreclaim: 812456 kB' 'KernelStack: 27664 kB' 'PageTables: 9080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 9665748 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237148 kB' 'VmallocChunk: 0 kB' 'Percpu: 110016 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3888500 kB' 'DirectMap2M: 32491520 kB' 'DirectMap1G: 99614720 kB' 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.473 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
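The get_meminfo calls traced above all follow the same pattern: read /proc/meminfo (or a per-node meminfo file) line by line with IFS=': ' and read -r var val _, skip every key that does not match the requested field, and echo the matching value before returning. A minimal standalone sketch of that pattern, assuming bash; the function name meminfo_value is illustrative, not the actual setup/common.sh helper:

    # Sketch of the field-lookup loop seen in this trace (illustrative only).
    meminfo_value() {
        local key=$1 var val _
        while IFS=': ' read -r var val _; do
            # Skip non-matching keys, print the value of the requested one.
            if [[ $var == "$key" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1
    }

    # e.g. meminfo_value HugePages_Rsvd  ->  0 on the host traced here (resv=0)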
00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.474 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.475 10:21:05 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53317212 kB' 'MemUsed: 12341796 kB' 'SwapCached: 0 kB' 'Active: 6196612 kB' 'Inactive: 3423008 kB' 'Active(anon): 5882032 kB' 'Inactive(anon): 0 kB' 'Active(file): 314580 kB' 'Inactive(file): 
3423008 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9376076 kB' 'Mapped: 132128 kB' 'AnonPages: 246704 kB' 'Shmem: 5638488 kB' 'KernelStack: 14024 kB' 'PageTables: 4060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 195312 kB' 'Slab: 627536 kB' 'SReclaimable: 195312 kB' 'SUnreclaim: 432224 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.475 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
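When a node argument is given (node=0 for the lookup in progress here, node=1 further below), the trace switches mem_f to /sys/devices/system/node/nodeN/meminfo, whose lines carry a leading "Node N " prefix that is stripped with an extglob substitution before the same field loop runs. A hedged per-node variant of the earlier sketch, again with an illustrative function name:

    # Per-node lookup sketch; requires extglob for the 'Node +([0-9]) ' prefix pattern.
    shopt -s extglob
    node_meminfo_value() {
        local key=$1 node=$2 line var val _
        local file=/sys/devices/system/node/node${node}/meminfo
        while IFS= read -r line; do
            line=${line#Node +([0-9]) }             # drop the leading "Node N " prefix
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$key" ]] && { echo "$val"; return 0; }
        done < "$file"
        return 1
    }

    # e.g. node_meminfo_value HugePages_Surp 0  ->  0 in this run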
00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:00.476 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.477 10:21:05 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679848 kB' 'MemFree: 54825660 kB' 'MemUsed: 5854188 kB' 'SwapCached: 0 kB' 'Active: 2519304 kB' 'Inactive: 103928 kB' 'Active(anon): 2343360 kB' 'Inactive(anon): 0 kB' 'Active(file): 175944 kB' 'Inactive(file): 103928 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2231040 kB' 'Mapped: 92076 kB' 'AnonPages: 392312 kB' 'Shmem: 1951168 kB' 'KernelStack: 13656 kB' 'PageTables: 5068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 107528 kB' 'Slab: 487760 kB' 'SReclaimable: 107528 kB' 'SUnreclaim: 380232 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.477 
10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.477 10:21:05 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.477 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
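Node 0's HugePages_Surp lookup returned 0, and node 1's (completing just below) does too, so the test's per-node totals reduce to the values read earlier and are compared against the expected even split ("node0=512 expecting 512", "node1=512 expecting 512"). A rough standalone check along the same lines; the expected value and the awk lookup are illustrative, not the hugepages.sh verification code itself:

    # Confirm each NUMA node reports its share of the 1024 allocated hugepages.
    expected_per_node=512
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        total=$(awk '/HugePages_Total:/ {print $NF}' "$node_dir/meminfo")
        echo "node${node}=${total} expecting ${expected_per_node}"
        (( total == expected_per_node )) || exit 1
    done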
00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:00.478 node0=512 expecting 512 00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:00.478 
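The trace above is setup/common.sh's meminfo helper scanning the file one key at a time with IFS=': ', skipping every key that is not the requested one (here HugePages_Surp) with `continue`, and echoing the value (0) once it matches. A minimal standalone sketch of that lookup pattern follows; the function name, the sed-based prefix strip, and the fallback value are illustrative, not the exact setup/common.sh code.

#!/usr/bin/env bash
# Sketch of the meminfo lookup pattern visible in the trace.
# lookup_meminfo KEY [NODE] prints KEY's value from /proc/meminfo, or from
# the per-node meminfo file when NODE is given and present. HugePages_*
# entries are plain counts; most other keys are reported in kB.
lookup_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # Per-node files prefix each line with "Node <N> "; drop that prefix so
    # both file formats split identically on ':' and whitespace.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "${val:-0}"
        return 0
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    echo 0   # key not present: report 0, like the traced helper's fallback
}

lookup_meminfo HugePages_Surp       # system-wide surplus huge page count
lookup_meminfo HugePages_Free 0     # same key, restricted to NUMA node 0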
10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:05:00.478 node1=512 expecting 512 00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:00.478 00:05:00.478 real 0m4.141s 00:05:00.478 user 0m1.607s 00:05:00.478 sys 0m2.610s 00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:00.478 10:21:05 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:00.478 ************************************ 00:05:00.478 END TEST per_node_1G_alloc 00:05:00.478 ************************************ 00:05:00.478 10:21:06 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:00.478 10:21:06 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:00.478 10:21:06 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:00.478 10:21:06 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.478 10:21:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:00.478 ************************************ 00:05:00.478 START TEST even_2G_alloc 00:05:00.478 ************************************ 00:05:00.478 10:21:06 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:05:00.478 10:21:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:00.478 10:21:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:00.478 10:21:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:00.478 10:21:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:00.478 10:21:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:00.478 10:21:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:00.478 10:21:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:00.478 10:21:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:00.478 10:21:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:00.478 10:21:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:00.478 10:21:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:00.478 10:21:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:00.478 10:21:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:00.478 10:21:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:00.478 10:21:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:00.478 10:21:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:00.478 10:21:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:05:00.478 10:21:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:00.478 10:21:06 
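With per_node_1G_alloc finished ("node0=512 expecting 512", "node1=512 expecting 512"), the even_2G_alloc test starts from a 2 GiB request: 2097152 kB of 2048 kB pages gives nr_hugepages=1024, which the trace then distributes across the two NUMA nodes at 512 each before comparing expected and actual counts. A short sketch of that arithmetic, assuming a plain even split with no remainder handling (function name and rounding behaviour are assumptions for illustration):

#!/usr/bin/env bash
# Sketch of the even per-node split implied by the trace:
# 2097152 kB / 2048 kB per page = 1024 pages -> 512 per node on 2 nodes.
split_hugepages_evenly() {
    local size_kb=$1 hugepage_kb=$2 nr_nodes=$3
    local nr_hugepages=$(( size_kb / hugepage_kb ))
    local per_node=$(( nr_hugepages / nr_nodes ))
    local node
    for (( node = 0; node < nr_nodes; node++ )); do
        echo "node${node}=${per_node} expecting ${per_node}"
    done
}

split_hugepages_evenly 2097152 2048 2
# node0=512 expecting 512
# node1=512 expecting 512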
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:00.478 10:21:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:00.478 10:21:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:00.478 10:21:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:00.478 10:21:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:00.478 10:21:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:00.478 10:21:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:00.478 10:21:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:05:00.478 10:21:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:00.478 10:21:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:04.698 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:05:04.698 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:05:04.698 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:05:04.698 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:05:04.698 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:05:04.698 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:05:04.698 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:05:04.698 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:05:04.698 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:05:04.698 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:05:04.698 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:05:04.698 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:05:04.698 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:05:04.698 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:05:04.698 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:05:04.698 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:05:04.698 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:04.698 10:21:09 
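After the setup script reports the vfio-pci-bound devices, verify_nr_hugepages begins by checking that transparent hugepages are not forced to [never] and, if so, reading AnonHugePages with no node argument, which makes the helper fall back to the global /proc/meminfo. A plain sketch of that preamble, using the standard kernel sysfs/proc paths (the variable names are illustrative):

#!/usr/bin/env bash
# Sketch of the verify preamble seen in the trace: only account for THP usage
# when transparent hugepages are not disabled outright.
thp_setting=$(cat /sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
anon_kb=0
if [[ $thp_setting != *"[never]"* ]]; then
    # AnonHugePages in /proc/meminfo is reported in kB.
    anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
fi
echo "AnonHugePages: ${anon_kb} kB"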
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338856 kB' 'MemFree: 108153508 kB' 'MemAvailable: 111498056 kB' 'Buffers: 4132 kB' 'Cached: 11603076 kB' 'SwapCached: 0 kB' 'Active: 8721728 kB' 'Inactive: 3526936 kB' 'Active(anon): 8231204 kB' 'Inactive(anon): 0 kB' 'Active(file): 490524 kB' 'Inactive(file): 3526936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 644200 kB' 'Mapped: 224156 kB' 'Shmem: 7589748 kB' 'KReclaimable: 302840 kB' 'Slab: 1114988 kB' 'SReclaimable: 302840 kB' 'SUnreclaim: 812148 kB' 'KernelStack: 27632 kB' 'PageTables: 8988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 9671848 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237244 kB' 'VmallocChunk: 0 kB' 'Percpu: 110016 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3888500 kB' 'DirectMap2M: 32491520 kB' 'DirectMap1G: 99614720 kB' 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.698 10:21:09 
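The long printf line above is the meminfo snapshot the helper captured with mapfile; the expansion "${mem[@]#Node +([0-9]) }" strips the "Node <N> " prefix that per-node meminfo files carry, so a single key/value parser works for both the global and the per-node formats. A standalone sketch of that stripping step follows; it assumes node0 exists on the machine, and extglob must be enabled for the +([0-9]) pattern:

#!/usr/bin/env bash
shopt -s extglob   # required for the +([0-9]) extended glob used below

# Per-node lines look like "Node 0 HugePages_Free:      512"; the global
# /proc/meminfo has no such prefix. Strip it from every captured element,
# mirroring the array expansion in the trace.
mapfile -t mem < /sys/devices/system/node/node0/meminfo
mem=("${mem[@]#Node +([0-9]) }")
printf '%s\n' "${mem[@]}" | head -n 3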
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.698 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338856 kB' 'MemFree: 108150984 kB' 'MemAvailable: 111495532 kB' 'Buffers: 4132 kB' 'Cached: 11603080 kB' 'SwapCached: 0 kB' 'Active: 8719596 kB' 'Inactive: 3526936 kB' 'Active(anon): 8229072 kB' 'Inactive(anon): 0 kB' 'Active(file): 490524 kB' 'Inactive(file): 3526936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 642276 kB' 'Mapped: 224376 kB' 'Shmem: 7589752 kB' 'KReclaimable: 302840 kB' 'Slab: 1115008 kB' 'SReclaimable: 302840 kB' 'SUnreclaim: 812168 kB' 'KernelStack: 27744 kB' 'PageTables: 9404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 9668344 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237212 kB' 'VmallocChunk: 0 kB' 'Percpu: 110016 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
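At this point anon=0 has been recorded and the same lookup is re-run for HugePages_Surp; the snapshot shows HugePages_Total: 1024 and HugePages_Free: 1024, i.e. the requested 2 GiB pool is fully allocated and still unused. A short sketch of the kind of consistency check this enables, relying on the kernel convention that the persistent pool equals HugePages_Total minus HugePages_Surp (the expected value and variable names are illustrative, not the script's exact bookkeeping):

#!/usr/bin/env bash
# Sketch: confirm the configured hugetlb pool matches what the test asked for.
expected=1024   # 2 GiB of 2048 kB pages, as requested by the traced test
read -r total free surp < <(awk '
    /^HugePages_Total:/ {t=$2}
    /^HugePages_Free:/  {f=$2}
    /^HugePages_Surp:/  {s=$2}
    END {print t, f, s}' /proc/meminfo)
if (( total - surp == expected )); then
    echo "hugepage pool OK: total=$total free=$free surp=$surp"
else
    echo "hugepage pool mismatch: total=$total surp=$surp expected=$expected" >&2
fi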
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3888500 kB' 'DirectMap2M: 32491520 kB' 'DirectMap1G: 99614720 kB' 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.699 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.700 10:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.700 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.700 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.700 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.700 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.700 10:21:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.700 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.700 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.700 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.700 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.700 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.700 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.700 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.700 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.700 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.700 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.700 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.700 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.700 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.700 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.700 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.700 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.700 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.700 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.700 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.700 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.700 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.700 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.700 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.700 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.700 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.700 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.700 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.700 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.700 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.700 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.700 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.700 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.700 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.700 10:21:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.700 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338856 kB' 'MemFree: 108157084 kB' 'MemAvailable: 111501632 kB' 'Buffers: 4132 kB' 'Cached: 11603096 kB' 'SwapCached: 0 kB' 'Active: 8713312 kB' 'Inactive: 3526936 kB' 'Active(anon): 8222788 kB' 'Inactive(anon): 0 kB' 'Active(file): 490524 kB' 'Inactive(file): 3526936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636456 kB' 'Mapped: 223660 kB' 'Shmem: 7589768 kB' 'KReclaimable: 302840 kB' 'Slab: 1115028 kB' 'SReclaimable: 302840 kB' 'SUnreclaim: 812188 kB' 'KernelStack: 27632 kB' 'PageTables: 8996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 9662336 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237192 kB' 'VmallocChunk: 0 kB' 'Percpu: 110016 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3888500 kB' 'DirectMap2M: 32491520 kB' 'DirectMap1G: 99614720 kB' 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
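With surp=0 recorded, the third lookup in this sequence is HugePages_Rsvd: pages already promised to existing mappings but not yet faulted in. Since reserved pages are still counted in HugePages_Free, subtracting the two gives the genuinely available count; a tiny sketch of that arithmetic (illustrative, not necessarily how hugepages.sh combines the values):

#!/usr/bin/env bash
# Sketch: huge pages that are free *and* not reserved for existing mappings.
read -r free rsvd < <(awk '
    /^HugePages_Free:/ {f=$2}
    /^HugePages_Rsvd:/ {r=$2}
    END {print f, r}' /proc/meminfo)
echo "truly available huge pages: $(( free - rsvd ))"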
val _ 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.701 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.702 10:21:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.702 10:21:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.702 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.703 10:21:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:04.703 nr_hugepages=1024 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:04.703 resv_hugepages=0 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:04.703 surplus_hugepages=0 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:04.703 anon_hugepages=0 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- 
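The loop traced above is doing a plain key lookup in /proc/meminfo: each line is split on ': ', fields are skipped until the requested key (HugePages_Rsvd here) matches, and the value column is echoed, which is how surp=0 and resv=0 are obtained before nr_hugepages=1024 is reported. A minimal sketch of that lookup, assuming a hypothetical helper name get_meminfo_sketch rather than the suite's own common.sh function:

  # Hedged sketch, not the suite's actual helper: read one field from a
  # meminfo-style file by splitting each line on ': ' and comparing the key,
  # the same shape as the traced read loop.
  get_meminfo_sketch() {
      local want=$1 file=${2:-/proc/meminfo}
      local key val _
      while IFS=': ' read -r key val _; do
          [[ $key == "$want" ]] && { echo "$val"; return 0; }
      done < "$file"
      return 1
  }

  # Example: reproduce the values echoed in the trace on a similar host.
  nr=$(get_meminfo_sketch HugePages_Total)    # 1024 in this run
  rsvd=$(get_meminfo_sketch HugePages_Rsvd)   # 0 in this run
  surp=$(get_meminfo_sketch HugePages_Surp)   # 0 in this run
  echo "nr_hugepages=$nr resv_hugepages=$rsvd surplus_hugepages=$surp"
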
setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338856 kB' 'MemFree: 108155344 kB' 'MemAvailable: 111499892 kB' 'Buffers: 4132 kB' 'Cached: 11603124 kB' 'SwapCached: 0 kB' 'Active: 8717136 kB' 'Inactive: 3526936 kB' 'Active(anon): 8226612 kB' 'Inactive(anon): 0 kB' 'Active(file): 490524 kB' 'Inactive(file): 3526936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 640792 kB' 'Mapped: 223808 kB' 'Shmem: 7589796 kB' 'KReclaimable: 302840 kB' 'Slab: 1114996 kB' 'SReclaimable: 302840 kB' 'SUnreclaim: 812156 kB' 'KernelStack: 27680 kB' 'PageTables: 9116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 9667520 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237144 kB' 'VmallocChunk: 0 kB' 'Percpu: 110016 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3888500 kB' 'DirectMap2M: 32491520 kB' 'DirectMap1G: 99614720 kB' 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.703 10:21:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.703 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.704 10:21:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.704 10:21:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.704 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:04.705 10:21:10 
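At this point the system-wide count has been confirmed (HugePages_Total: 1024, so 1024 == nr_hugepages + surp + resv holds), get_nodes has recorded 512 pages for each of the two NUMA nodes, and the trace moves on to re-reading HugePages_Surp from /sys/devices/system/node/node0/meminfo. A hedged sketch of that per-node read, with hypothetical variable names, falling back to /proc/meminfo when a node file is missing as the traced [[ -e ... ]] test does:

  # Hedged sketch of the per-node check for an even 2G-per-node allocation.
  expected_per_node=512
  for node_dir in /sys/devices/system/node/node[0-9]*; do
      node=${node_dir##*node}
      mem_f=$node_dir/meminfo
      # Fall back to the global file if the per-node meminfo is absent,
      # mirroring the existence test in the trace.
      [[ -e $mem_f ]] || mem_f=/proc/meminfo
      total=$(awk -F': +' '$1 ~ /HugePages_Total/ {print $NF}' "$mem_f")
      surp=$(awk  -F': +' '$1 ~ /HugePages_Surp/  {print $NF}' "$mem_f")
      echo "node$node HugePages_Total=$total HugePages_Surp=$surp" \
           "(expected $expected_per_node total, 0 surplus)"
  done
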
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53327276 kB' 'MemUsed: 12331732 kB' 'SwapCached: 0 kB' 'Active: 6192616 kB' 'Inactive: 3423008 kB' 'Active(anon): 5878036 kB' 'Inactive(anon): 0 kB' 'Active(file): 314580 kB' 'Inactive(file): 3423008 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9376156 kB' 'Mapped: 132128 kB' 'AnonPages: 242744 kB' 'Shmem: 5638568 kB' 'KernelStack: 14056 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 195312 kB' 'Slab: 627056 kB' 'SReclaimable: 195312 kB' 'SUnreclaim: 431744 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.705 10:21:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.705 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.706 
10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.706 10:21:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:04.706 10:21:10 
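Node 0 reports HugePages_Surp: 0, so nodes_test[0] keeps its value and the loop advances to node 1 via /sys/devices/system/node/node1/meminfo. A rough equivalent of that bookkeeping, not the suite's exact hugepages.sh logic and using hypothetical names, assuming each node is seeded with its even 512-page share, adds any per-node surplus and then checks that the shares still sum to the global nr_hugepages:

  # Hedged sketch: accumulate per-node hugepage counts and verify the total.
  nodes_test=([0]=512 [1]=512)   # assumed even split for even_2G_alloc
  nr_hugepages=1024
  sum=0
  for node in "${!nodes_test[@]}"; do
      surp=$(awk -F': +' '$1 ~ /HugePages_Surp/ {print $NF}' \
             "/sys/devices/system/node/node$node/meminfo")
      (( nodes_test[node] += surp ))
      (( sum += nodes_test[node] ))
  done
  (( sum == nr_hugepages )) && echo "even 2G allocation verified" \
                            || echo "unexpected per-node split" >&2
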
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679848 kB' 'MemFree: 54828532 kB' 'MemUsed: 5851316 kB' 'SwapCached: 0 kB' 'Active: 2520940 kB' 'Inactive: 103928 kB' 'Active(anon): 2344996 kB' 'Inactive(anon): 0 kB' 'Active(file): 175944 kB' 'Inactive(file): 103928 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2231120 kB' 'Mapped: 91696 kB' 'AnonPages: 394308 kB' 'Shmem: 1951248 kB' 'KernelStack: 13640 kB' 'PageTables: 5020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 107528 kB' 'Slab: 487940 kB' 'SReclaimable: 107528 kB' 'SUnreclaim: 380412 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.706 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.707 10:21:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.707 
10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.707 10:21:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.707 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.708 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.708 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.708 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.708 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.708 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.708 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.708 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.708 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.708 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.708 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:04.708 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:04.708 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:04.708 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:04.708 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:04.708 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:04.708 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:04.708 node0=512 expecting 512 00:05:04.708 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:04.708 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:04.708 10:21:10 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:04.708 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:05:04.708 node1=512 expecting 512 00:05:04.708 10:21:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:04.708 00:05:04.708 real 0m4.108s 00:05:04.708 user 0m1.676s 00:05:04.708 sys 0m2.503s 00:05:04.708 10:21:10 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.708 10:21:10 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:04.708 ************************************ 00:05:04.708 END TEST even_2G_alloc 00:05:04.708 ************************************ 00:05:04.708 10:21:10 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:04.708 10:21:10 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:04.708 10:21:10 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:04.708 10:21:10 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.708 10:21:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:04.708 ************************************ 00:05:04.708 START TEST odd_alloc 00:05:04.708 ************************************ 00:05:04.708 10:21:10 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:05:04.708 10:21:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:04.708 10:21:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:05:04.708 10:21:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:04.708 10:21:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:04.708 10:21:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:04.708 10:21:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:04.708 10:21:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:04.708 10:21:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:04.708 10:21:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:04.708 10:21:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:04.708 10:21:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:04.708 10:21:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:04.708 10:21:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:04.708 10:21:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:04.708 10:21:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:04.708 10:21:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:04.708 10:21:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:05:04.708 10:21:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:04.708 10:21:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:04.708 10:21:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:05:04.708 10:21:10 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@83 -- # : 0 00:05:04.708 10:21:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:04.708 10:21:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:04.708 10:21:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:04.708 10:21:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:04.708 10:21:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:05:04.708 10:21:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:04.708 10:21:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:08.920 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:05:08.920 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:05:08.921 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:05:08.921 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:05:08.921 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:05:08.921 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:05:08.921 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:05:08.921 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:05:08.921 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:05:08.921 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:05:08.921 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:05:08.921 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:05:08.921 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:05:08.921 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:05:08.921 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:05:08.921 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:05:08.921 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338856 kB' 'MemFree: 108155248 kB' 'MemAvailable: 111499780 kB' 'Buffers: 4132 kB' 'Cached: 11603272 kB' 'SwapCached: 0 kB' 'Active: 8712412 kB' 'Inactive: 3526936 kB' 'Active(anon): 8221888 kB' 'Inactive(anon): 0 kB' 'Active(file): 490524 kB' 'Inactive(file): 3526936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634744 kB' 'Mapped: 223416 kB' 'Shmem: 7589944 kB' 'KReclaimable: 302808 kB' 'Slab: 1115408 kB' 'SReclaimable: 302808 kB' 'SUnreclaim: 812600 kB' 'KernelStack: 27760 kB' 'PageTables: 8724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508432 kB' 'Committed_AS: 9661608 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237240 kB' 'VmallocChunk: 0 kB' 'Percpu: 110016 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3888500 kB' 'DirectMap2M: 32491520 kB' 'DirectMap1G: 99614720 kB' 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.921 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.922 10:21:14 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.922 10:21:14 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.922 10:21:14 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.922 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
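The long run of [[ <field> == AnonHugePages ]] / continue lines above is the field scan inside the get_meminfo helper from setup/common.sh: it loads either /proc/meminfo or /sys/devices/system/node/node<N>/meminfo, strips the "Node <N> " prefix, and walks key/value pairs until the requested field matches. A minimal stand-alone sketch of that idea (a reconstruction from the trace, not the verbatim SPDK helper; the function name below is illustrative only):

    # Sketch only: fetch one meminfo field, optionally for a single NUMA node.
    get_meminfo_sketch() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local line var val _
        while IFS= read -r line; do
            line=${line#"Node $node "}      # per-node files prefix every line with "Node <n> "
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"                 # kB for sizes, a plain count for HugePages_* fields
                return 0
            fi
        done < "$mem_f"
        return 1
    }

For example, get_meminfo_sketch HugePages_Free 1 would print the free 2048 kB pages on node 1, which is the per-node number the surrounding hugepages.sh loops accumulate and compare against the expected allocation.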
00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338856 kB' 'MemFree: 108164964 kB' 'MemAvailable: 111509496 kB' 'Buffers: 4132 kB' 'Cached: 11603276 kB' 'SwapCached: 0 kB' 'Active: 8712184 kB' 'Inactive: 3526936 kB' 'Active(anon): 8221660 kB' 'Inactive(anon): 0 kB' 'Active(file): 490524 kB' 'Inactive(file): 3526936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634580 kB' 'Mapped: 223416 kB' 'Shmem: 7589948 kB' 'KReclaimable: 302808 kB' 'Slab: 1115292 kB' 'SReclaimable: 302808 kB' 'SUnreclaim: 812484 kB' 'KernelStack: 27776 kB' 'PageTables: 8740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508432 kB' 'Committed_AS: 9661628 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237256 kB' 'VmallocChunk: 0 kB' 'Percpu: 110016 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3888500 kB' 'DirectMap2M: 32491520 kB' 'DirectMap1G: 99614720 kB' 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.923 10:21:14 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
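While verify_nr_hugepages scans the system-wide meminfo for HugePages_Surp above, the per-node targets this odd_alloc run is checking come from splitting the 1025 hugepages it requests (HUGEMEM=2049) across the two nodes, traced earlier as nodes_test[_no_nodes - 1]=512 on the first pass and 513 on the second. A rough reconstruction of that split (variable names here are mine, not the script's):

    # Sketch: divide the remaining pages over the remaining nodes, last node first.
    nr=1025 nodes=2
    declare -a want
    while (( nodes > 0 )); do
        want[nodes - 1]=$(( nr / nodes ))   # first pass: 1025 / 2 = 512 for node1
        : $(( nr -= want[nodes - 1] ))      # 513 pages left
        : $(( nodes-- ))                    # second pass gives node0 the remaining 513
    done
    echo "node0=${want[0]} node1=${want[1]}"   # -> node0=513 node1=512

The odd total is what distinguishes this test from even_2G_alloc: 1025 pages cannot be split evenly, so one node is expected to end up with one extra 2048 kB page.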
00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.923 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.924 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.924 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.924 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.924 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.924 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.924 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.924 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.924 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.924 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.924 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.924 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': '
00:05:08.924 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:08.924 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:08.924 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[... setup/common.sh@31/@32 check-and-continue trace repeats for each remaining /proc/meminfo field ...]
00:05:08.925 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:08.925 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:08.925 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:08.925 10:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:08.925 10:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:08.925 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:08.925 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:05:08.925 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:08.925 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:08.925 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:08.925 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:08.925 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:08.925 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:08.925 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:08.925 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:08.925 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:08.925 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338856 kB' 'MemFree: 108165920 kB' 'MemAvailable: 111510452 kB' 'Buffers: 4132 kB' 'Cached: 11603276 kB' 'SwapCached: 0 kB' 'Active: 8711916 kB' 'Inactive: 3526936 kB' 'Active(anon): 8221392 kB' 'Inactive(anon): 0 kB' 'Active(file): 490524 kB' 'Inactive(file): 3526936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634248 kB' 'Mapped: 223392 kB' 'Shmem: 7589948 kB' 'KReclaimable: 302808 kB' 'Slab: 1115388 kB' 'SReclaimable: 302808 kB' 'SUnreclaim: 812580 kB' 'KernelStack: 27888 kB' 'PageTables: 9744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508432 kB' 'Committed_AS: 9661648 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237288 kB' 'VmallocChunk: 0 kB' 'Percpu: 110016 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3888500 kB' 'DirectMap2M: 32491520 kB' 'DirectMap1G: 99614720 kB'
00:05:08.925 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:08.925 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[... setup/common.sh@31/@32 check-and-continue trace repeats for each remaining /proc/meminfo field ...]
00:05:08.928 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:08.928 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:08.928 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:08.928 10:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:08.928 10:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:05:08.928 nr_hugepages=1025
00:05:08.928 10:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:08.928 resv_hugepages=0
00:05:08.928 10:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:08.928 surplus_hugepages=0
00:05:08.928 10:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:08.928 anon_hugepages=0
00:05:08.928 10:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:05:08.928 10:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:05:08.928 10:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:08.928 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:08.928 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:05:08.928 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:08.928 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:08.928 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:08.928 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:08.928 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:08.928 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:08.928 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:08.928 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:08.928 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:08.928 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338856 kB' 'MemFree: 108163492 kB' 'MemAvailable: 111508024 kB' 'Buffers: 4132 kB' 'Cached: 11603308 kB' 'SwapCached: 0 kB' 'Active: 8711508 kB' 'Inactive: 3526936 kB' 'Active(anon): 8220984 kB' 'Inactive(anon): 0 kB' 'Active(file): 490524 kB' 'Inactive(file): 3526936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634232 kB' 'Mapped: 223316 kB' 'Shmem: 7589980 kB' 'KReclaimable: 302808 kB' 'Slab: 1115396 kB' 'SReclaimable: 302808 kB' 'SUnreclaim: 812588 kB' 'KernelStack: 28000 kB' 'PageTables: 9996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508432 kB' 'Committed_AS: 9661668 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237256 kB' 'VmallocChunk: 0 kB' 'Percpu: 110016 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3888500 kB' 'DirectMap2M: 32491520 kB' 'DirectMap1G: 99614720 kB'
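For readers following the trace: each get_meminfo call above walks the full meminfo dump line by line, splitting on ': ' and continuing until the requested key (HugePages_Surp, HugePages_Rsvd, HugePages_Total) matches, then echoes that value; hugepages.sh then stores the result in surp/resv and checks the odd allocation against 1025 == nr_hugepages + surp + resv. A minimal stand-alone sketch of that lookup pattern, with illustrative names (this is not the literal setup/common.sh source):

# Sketch only (illustrative, not the setup/common.sh code): look up one field in
# /proc/meminfo, or in a per-NUMA-node meminfo file when a node id is given.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    while IFS=': ' read -r var val _; do
        # Skip every field until the requested key matches, then print its value.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

Against the dump captured above, such a lookup would return 0 for HugePages_Surp and HugePages_Rsvd and 1025 for HugePages_Total, which is exactly what the surp=0, resv=0 and nr_hugepages checks in the trace consume.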
00:05:08.928 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:08.928 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[... setup/common.sh@31/@32 check-and-continue trace repeats for each remaining /proc/meminfo field ...]
00:05:08.930 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:08.930 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:05:08.930 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:08.930 10:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:05:08.930 10:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:08.930 10:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:05:08.930 10:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:08.930 10:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:08.930 10:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:08.930 10:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:05:08.930 10:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:08.930 10:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:08.930 10:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:08.930 10:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
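The get_nodes/nodes_sys lines above record the per-node targets for the odd_alloc case: with two NUMA nodes and an odd total of 1025 pages, the split the test observes is 512 on node0 and 513 on node1, and each node is then re-checked through its per-node meminfo file. A small sketch of such an uneven split, with a hypothetical helper name (the real per-node targets come from setup/hugepages.sh):

# Sketch only: reproduce the uneven split the odd_alloc trace reports
# (1025 pages over 2 nodes -> 512 on node0, 513 on node1).
split_odd_alloc_sketch() {
    local total=$1 no_nodes=$2
    local base=$((total / no_nodes)) rem=$((total % no_nodes))
    local node
    for ((node = 0; node < no_nodes; node++)); do
        # Hand the remainder to the highest-numbered node(s) so the sum stays equal to total.
        if ((node >= no_nodes - rem)); then
            echo "node${node}=$((base + 1))"
        else
            echo "node${node}=${base}"
        fi
    done
}

# split_odd_alloc_sketch 1025 2
# node0=512
# node1=513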
00:05:08.930 10:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:08.930 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:08.930 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:05:08.930 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:08.930 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:08.930 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:08.930 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:08.930 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:08.930 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:08.930 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:08.930 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:08.930 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:08.930 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53332052 kB' 'MemUsed: 12326956 kB' 'SwapCached: 0 kB' 'Active: 6191436 kB' 'Inactive: 3423008 kB' 'Active(anon): 5876856 kB' 'Inactive(anon): 0 kB' 'Active(file): 314580 kB' 'Inactive(file): 3423008 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9376264 kB' 'Mapped: 132108 kB' 'AnonPages: 241284 kB' 'Shmem: 5638676 kB' 'KernelStack: 14040 kB' 'PageTables: 4100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 195312 kB' 'Slab: 627760 kB' 'SReclaimable: 195312 kB' 'SUnreclaim: 432448 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:05:08.930 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:08.930 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[... setup/common.sh@31/@32 check-and-continue trace repeats for the remaining node0 meminfo fields ...]
00:05:08.931 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:08.931 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:05:08.931 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:08.931 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.931 10:21:14
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.931 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.931 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.931 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.931 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.931 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.931 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.931 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.931 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.931 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.931 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.931 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.931 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.931 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.931 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679848 
kB' 'MemFree: 54834328 kB' 'MemUsed: 5845520 kB' 'SwapCached: 0 kB' 'Active: 2519576 kB' 'Inactive: 103928 kB' 'Active(anon): 2343632 kB' 'Inactive(anon): 0 kB' 'Active(file): 175944 kB' 'Inactive(file): 103928 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2231224 kB' 'Mapped: 91208 kB' 'AnonPages: 392452 kB' 'Shmem: 1951352 kB' 'KernelStack: 13624 kB' 'PageTables: 4980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 107496 kB' 'Slab: 487604 kB' 'SReclaimable: 107496 kB' 'SUnreclaim: 380108 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
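The two per-node dumps each carry their own HugePages_Total line: 512 pages on node0 and 513 on node1, i.e. an odd total of 1025 spread as evenly as two NUMA nodes allow. One hedged way to cross-check that split outside the test script is the standard per-node sysfs counter (assuming the 2048 kB hugepage size this host uses; the trace itself goes through the node meminfo files instead):

# Sketch: sum per-node hugepage counts straight from sysfs.
total=0
for f in /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages; do
    n=$(<"$f")
    printf '%s: %s pages\n' "${f%%/hugepages/*}" "$n"
    total=$(( total + n ))
done
echo "total: $total"   # for this run: 512 (node0) + 513 (node1) = 1025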
00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.932 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.933 10:21:14 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:08.933 10:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:08.934 10:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:08.934 10:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:08.934 10:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:05:08.934 node0=512 expecting 513 00:05:08.934 10:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:08.934 10:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:08.934 10:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:08.934 10:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:05:08.934 node1=513 expecting 512 00:05:08.934 10:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:05:08.934 00:05:08.934 real 0m4.020s 00:05:08.934 user 0m1.559s 00:05:08.934 sys 0m2.519s 00:05:08.934 10:21:14 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:08.934 10:21:14 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:08.934 ************************************ 00:05:08.934 END TEST odd_alloc 00:05:08.934 ************************************ 00:05:08.934 10:21:14 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:08.934 10:21:14 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:08.934 10:21:14 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:08.934 10:21:14 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.934 10:21:14 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:08.934 ************************************ 00:05:08.934 START TEST custom_alloc 00:05:08.934 ************************************ 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:05:08.934 10:21:14 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # 
nr_hugepages=1024 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@78 -- # return 0 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:08.934 10:21:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:12.240 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:05:12.240 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:05:12.240 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:05:12.240 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:05:12.240 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:05:12.240 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:05:12.240 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:05:12.240 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:05:12.240 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:05:12.240 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:05:12.240 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:05:12.240 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:05:12.240 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:05:12.240 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:05:12.240 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:05:12.240 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:05:12.240 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:05:12.240 10:21:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:05:12.240 10:21:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:12.240 10:21:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:12.240 10:21:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:12.240 10:21:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:12.240 10:21:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:12.240 10:21:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:12.240 10:21:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:12.240 10:21:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:12.240 10:21:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:12.240 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:12.240 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:12.240 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:12.240 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:12.240 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.240 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:12.240 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:12.240 10:21:17 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:05:12.240 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.240 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.240 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338856 kB' 'MemFree: 107151492 kB' 'MemAvailable: 110496024 kB' 'Buffers: 4132 kB' 'Cached: 11603444 kB' 'SwapCached: 0 kB' 'Active: 8712184 kB' 'Inactive: 3526936 kB' 'Active(anon): 8221660 kB' 'Inactive(anon): 0 kB' 'Active(file): 490524 kB' 'Inactive(file): 3526936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634280 kB' 'Mapped: 223432 kB' 'Shmem: 7590116 kB' 'KReclaimable: 302808 kB' 'Slab: 1114736 kB' 'SReclaimable: 302808 kB' 'SUnreclaim: 811928 kB' 'KernelStack: 27568 kB' 'PageTables: 8784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985168 kB' 'Committed_AS: 9659552 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237080 kB' 'VmallocChunk: 0 kB' 'Percpu: 110016 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3888500 kB' 'DirectMap2M: 32491520 kB' 'DirectMap1G: 99614720 kB' 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
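A little earlier the trace walks hugepages.sh@181-183 building the HUGENODE list from nodes_hp and accumulating the page total, ending in HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' and nr_hugepages=1536. A condensed sketch of that assembly, with the per-node counts hard-coded from this run:

# Sketch of the HUGENODE assembly traced at hugepages.sh@181-183:
# one "nodes_hp[N]=count" entry per node plus a running page total.
declare -a nodes_hp=([0]=512 [1]=1024)     # per-node counts from this run
declare -a HUGENODE=()
_nr_hugepages=0
for node in "${!nodes_hp[@]}"; do
    HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
    (( _nr_hugepages += nodes_hp[node] ))
done
( IFS=,; echo "HUGENODE=${HUGENODE[*]}" )   # nodes_hp[0]=512,nodes_hp[1]=1024
echo "nr_hugepages=$_nr_hugepages"          # 1536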
00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.241 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.242 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338856 kB' 'MemFree: 107151988 kB' 'MemAvailable: 110496520 kB' 'Buffers: 4132 kB' 'Cached: 11603448 kB' 'SwapCached: 0 kB' 'Active: 8711364 kB' 'Inactive: 3526936 kB' 'Active(anon): 8220840 kB' 'Inactive(anon): 0 kB' 'Active(file): 490524 kB' 'Inactive(file): 3526936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 633956 kB' 'Mapped: 223340 kB' 'Shmem: 7590120 kB' 'KReclaimable: 302808 kB' 'Slab: 1114732 kB' 'SReclaimable: 302808 kB' 'SUnreclaim: 811924 kB' 'KernelStack: 27584 kB' 'PageTables: 8800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985168 kB' 'Committed_AS: 9659572 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237032 kB' 'VmallocChunk: 0 kB' 'Percpu: 110016 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3888500 kB' 'DirectMap2M: 32491520 kB' 'DirectMap1G: 99614720 kB' 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.243 10:21:17 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.243 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.244 10:21:17 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.244 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:12.245 10:21:17 
setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338856 kB' 'MemFree: 107152364 kB' 'MemAvailable: 110496896 kB' 'Buffers: 4132 kB' 'Cached: 11603464 kB' 'SwapCached: 0 kB' 'Active: 8711380 kB' 'Inactive: 3526936 kB' 'Active(anon): 8220856 kB' 'Inactive(anon): 0 kB' 'Active(file): 490524 kB' 'Inactive(file): 3526936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 633952 kB' 'Mapped: 223340 kB' 'Shmem: 7590136 kB' 'KReclaimable: 302808 kB' 'Slab: 1114732 kB' 'SReclaimable: 302808 kB' 'SUnreclaim: 811924 kB' 'KernelStack: 27584 kB' 'PageTables: 8800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985168 kB' 'Committed_AS: 9659592 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237032 kB' 'VmallocChunk: 0 kB' 'Percpu: 110016 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3888500 kB' 'DirectMap2M: 32491520 kB' 'DirectMap1G: 99614720 kB' 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.245 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.246 10:21:17 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.246 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.247 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.248 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.248 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.248 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.248 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.248 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.248 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.248 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.248 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.248 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.248 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:05:12.248 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.248 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.248 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:05:12.602 nr_hugepages=1536 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:12.602 resv_hugepages=0 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:12.602 surplus_hugepages=0 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:12.602 anon_hugepages=0 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 
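The block of trace above is the get_meminfo helper (setup/common.sh, as named in the log) scanning the memory counters once per requested key, after which setup/hugepages.sh records anon=0, surp=0 and resv=0 and echoes the nr_hugepages/resv_hugepages/surplus_hugepages/anon_hugepages summary. A minimal sketch of that flow, paraphrased from the visible trace rather than copied from the SPDK source (the function body, the node-aware fallback and the name get_meminfo_sketch are assumptions inferred from the traced mem_f=/proc/meminfo and /sys/devices/system/node/node*/meminfo checks):

```bash
#!/usr/bin/env bash
# Sketch of the meminfo lookup seen in the trace (paraphrased, not the actual
# setup/common.sh source). Prints the first value matching the requested key.
get_meminfo_sketch() {
    local get=$1 node=${2:-}        # key to look up, optional NUMA node
    local mem_f=/proc/meminfo       # default source, as in the trace
    # The trace tests /sys/devices/system/node/node${node}/meminfo; with an
    # empty node that test fails and the helper stays on /proc/meminfo.
    # Switching mem_f when a node is given is an inferred assumption.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # (The traced helper also strips a leading "Node <n> " prefix from each
    # line before parsing; omitted here for brevity.)
    local var val _
    while IFS=': ' read -r var val _; do   # same IFS=': ' read loop as the trace
        if [[ $var == "$get" ]]; then
            echo "$val"                    # numeric value only; units land in $_
            return 0
        fi
    done <"$mem_f"
    return 1
}

# Accounting step shown at the end of the trace: the preallocated total must
# match nr_hugepages plus surplus and reserved pages (all zero in this run).
nr_hugepages=1536
anon=$(get_meminfo_sketch AnonHugePages)    # 0 in this log
surp=$(get_meminfo_sketch HugePages_Surp)   # 0 in this log
resv=$(get_meminfo_sketch HugePages_Rsvd)   # 0 in this log
echo "nr_hugepages=$nr_hugepages" "resv_hugepages=$resv" \
     "surplus_hugepages=$surp" "anon_hugepages=$anon"   # mirrors the echoed summary
(( 1536 == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"
```

With all three counters at zero the check reduces to 1536 == 1536, and the script moves on to read HugePages_Total, which is the scan that continues below.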
00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338856 kB' 'MemFree: 107152936 kB' 'MemAvailable: 110497468 kB' 'Buffers: 4132 kB' 'Cached: 11603504 kB' 'SwapCached: 0 kB' 'Active: 8711216 kB' 'Inactive: 3526936 kB' 'Active(anon): 8220692 kB' 'Inactive(anon): 0 kB' 'Active(file): 490524 kB' 'Inactive(file): 3526936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 633752 kB' 'Mapped: 223340 kB' 'Shmem: 7590176 kB' 'KReclaimable: 302808 kB' 'Slab: 1114732 kB' 'SReclaimable: 302808 kB' 'SUnreclaim: 811924 kB' 'KernelStack: 27584 kB' 'PageTables: 8800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985168 kB' 'Committed_AS: 9659612 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237032 kB' 'VmallocChunk: 0 kB' 'Percpu: 110016 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3888500 kB' 'DirectMap2M: 32491520 kB' 'DirectMap1G: 99614720 kB' 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.602 10:21:17 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.602 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
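A note on reading the escaped patterns that fill this scan: with xtrace enabled, bash prints the quoted right-hand side of a [[ == ]] comparison with each character backslash-escaped to mark it as a literal match, which is why the key HugePages_Total appears as \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l on every comparison line. A small reproduction (hypothetical variable values, not part of the test):

```bash
#!/usr/bin/env bash
# Reproduce the escaped-pattern style of the trace: a quoted RHS of [[ == ]]
# is echoed by xtrace with each character backslash-escaped.
set -x
get=HugePages_Total
var=MemTotal
[[ $var == "$get" ]] || echo "no match, the scan continues"
# xtrace prints something like: + [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
set +x
```

Each key that does not match simply hits continue, so a single get_meminfo call emits one comparison line and one continue line per /proc/meminfo counter, which accounts for the volume of output in this section.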
00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.603 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.604 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.604 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.604 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.604 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.604 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
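For readability, the parsing pattern that produces the trace above can be sketched as a small shell function. This is a simplified reconstruction from the commands visible in the xtrace (mem_f, mapfile, IFS=': ', read -r var val _), not the exact body of setup/common.sh's get_meminfo; the name get_meminfo_sketch and the literal "Node $node " prefix strip are illustrative simplifications.

get_meminfo_sketch() {
    # Return the value of one meminfo key, system-wide or for one NUMA node.
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <n> "; the real script strips
    # this with an extglob pattern, a literal prefix strip is used here instead.
    mem=("${mem[@]#Node $node }")
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # every skipped key appears as a "continue" in the trace
        echo "$val"
        return 0
    done
    return 1
}

# Example calls matching the lookups traced here:
#   get_meminfo_sketch HugePages_Total      # prints 1536 on this machine
#   get_meminfo_sketch HugePages_Surp 0     # per-node lookup, prints 0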
00:05:12.604 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.604 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.604 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:05:12.604 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:12.604 10:21:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:12.604 10:21:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:12.604 10:21:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:05:12.604 10:21:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:12.604 10:21:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:12.604 10:21:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:12.604 10:21:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:12.604 10:21:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:12.604 10:21:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:12.604 10:21:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:12.604 10:21:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:12.604 10:21:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:12.604 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:12.604 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:05:12.604 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:12.604 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:12.604 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.604 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:12.604 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:12.604 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.604 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.604 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.604 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.604 10:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53360000 kB' 'MemUsed: 12299008 kB' 'SwapCached: 0 kB' 'Active: 6189480 kB' 'Inactive: 3423008 kB' 'Active(anon): 5874900 kB' 'Inactive(anon): 0 kB' 'Active(file): 314580 kB' 'Inactive(file): 3423008 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9376268 kB' 'Mapped: 132108 kB' 'AnonPages: 239352 kB' 'Shmem: 5638680 kB' 'KernelStack: 13928 kB' 'PageTables: 3732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 
195312 kB' 'Slab: 627332 kB' 'SReclaimable: 195312 kB' 'SUnreclaim: 432020 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace condensed: the same setup/common.sh@31-32 read/continue cycle skips every node0 meminfo field from MemTotal through Unaccepted while scanning for HugePages_Surp]
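The surrounding hugepages.sh logic (visible at @110-@117 and @126-@128 in this trace) folds reserved and surplus pages into each node's requested count and then prints the "nodeN=... expecting ..." lines seen further down. A condensed sketch of that bookkeeping, assuming the caller provides nodes_test and resv and the get_meminfo_sketch helper above; reading HugePages_Total per node inside the get_nodes step is an assumption, since this part of the trace only shows the resulting assignments (512 and 1024).

check_node_hugepages() {
    # Expects (set by the caller, as in the traced test):
    #   nodes_test  - requested hugepages per node, e.g. nodes_test=([0]=512 [1]=1024)
    #   resv        - reserved pages to add to each expectation, e.g. resv=0
    local -a nodes_sys=()
    local node id
    # What the kernel actually reports per NUMA node (mirrors get_nodes in the trace;
    # the per-node HugePages_Total lookup is an assumption).
    for node in /sys/devices/system/node/node[0-9]*; do
        [[ -d $node ]] || continue
        id=${node##*node}
        nodes_sys[id]=$(get_meminfo_sketch HugePages_Total "$id")
    done
    # Fold reserved and surplus pages into the expectation, then report both sides.
    for id in "${!nodes_test[@]}"; do
        (( nodes_test[id] += resv ))
        (( nodes_test[id] += $(get_meminfo_sketch HugePages_Surp "$id") ))
        echo "node$id=${nodes_sys[id]} expecting ${nodes_test[id]}"
    done
}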
00:05:12.605 10:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.605 10:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.605 10:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.605 10:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.605 10:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.605 10:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.605 10:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.605 10:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.605 10:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.605 10:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:12.605 10:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:12.605 10:21:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:12.605 10:21:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:12.605 10:21:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:12.605 10:21:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:12.605 10:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:12.605 10:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:05:12.605 10:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:12.605 10:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:12.605 10:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.605 10:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:12.605 10:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:12.605 10:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.605 10:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.605 10:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.605 10:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.605 10:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679848 kB' 'MemFree: 53792848 kB' 'MemUsed: 6887000 kB' 'SwapCached: 0 kB' 'Active: 2521932 kB' 'Inactive: 103928 kB' 'Active(anon): 2345988 kB' 'Inactive(anon): 0 kB' 'Active(file): 175944 kB' 'Inactive(file): 103928 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2231368 kB' 'Mapped: 91232 kB' 'AnonPages: 394568 kB' 'Shmem: 1951496 kB' 'KernelStack: 13624 kB' 'PageTables: 5020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 107496 kB' 'Slab: 487400 kB' 'SReclaimable: 107496 kB' 'SUnreclaim: 379904 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 
0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace condensed: the same setup/common.sh@31-32 read/continue cycle skips every node1 meminfo field from MemTotal through Unaccepted while scanning for HugePages_Surp] 00:05:12.606 10:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.606 10:21:18
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.606 10:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.606 10:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.606 10:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.606 10:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:12.606 10:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.606 10:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.606 10:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.606 10:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:12.606 10:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:12.606 10:21:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:12.606 10:21:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:12.606 10:21:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:12.606 10:21:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:12.606 10:21:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:12.606 node0=512 expecting 512 00:05:12.606 10:21:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:12.606 10:21:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:12.606 10:21:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:12.606 10:21:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:05:12.606 node1=1024 expecting 1024 00:05:12.607 10:21:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:05:12.607 00:05:12.607 real 0m3.720s 00:05:12.607 user 0m1.441s 00:05:12.607 sys 0m2.265s 00:05:12.607 10:21:18 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.607 10:21:18 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:12.607 ************************************ 00:05:12.607 END TEST custom_alloc 00:05:12.607 ************************************ 00:05:12.607 10:21:18 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:12.607 10:21:18 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:12.607 10:21:18 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:12.607 10:21:18 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.607 10:21:18 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:12.607 ************************************ 00:05:12.607 START TEST no_shrink_alloc 00:05:12.607 ************************************ 00:05:12.607 10:21:18 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:05:12.607 10:21:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:12.607 10:21:18 setup.sh.hugepages.no_shrink_alloc 
-- setup/hugepages.sh@49 -- # local size=2097152 00:05:12.607 10:21:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:12.607 10:21:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:05:12.607 10:21:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:12.607 10:21:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:12.607 10:21:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:12.607 10:21:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:12.607 10:21:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:12.607 10:21:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:12.607 10:21:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:12.607 10:21:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:12.607 10:21:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:12.607 10:21:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:12.607 10:21:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:12.607 10:21:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:12.607 10:21:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:12.607 10:21:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:12.607 10:21:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:12.607 10:21:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:05:12.607 10:21:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:12.607 10:21:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:16.810 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:05:16.810 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:05:16.810 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:05:16.810 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:05:16.810 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:05:16.810 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:05:16.810 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:05:16.810 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:05:16.810 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:05:16.810 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:05:16.810 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:05:16.810 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:05:16.810 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:05:16.810 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:05:16.810 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:05:16.810 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:05:16.810 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:05:16.810 10:21:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # 
verify_nr_hugepages 00:05:16.810 10:21:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:16.810 10:21:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:16.810 10:21:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:16.810 10:21:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:16.810 10:21:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:16.810 10:21:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:16.810 10:21:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:16.810 10:21:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:16.810 10:21:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:16.810 10:21:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:16.810 10:21:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:16.810 10:21:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:16.810 10:21:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.810 10:21:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:16.810 10:21:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:16.810 10:21:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.810 10:21:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.810 10:21:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.810 10:21:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.810 10:21:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338856 kB' 'MemFree: 108210204 kB' 'MemAvailable: 111554736 kB' 'Buffers: 4132 kB' 'Cached: 11603616 kB' 'SwapCached: 0 kB' 'Active: 8718036 kB' 'Inactive: 3526936 kB' 'Active(anon): 8227512 kB' 'Inactive(anon): 0 kB' 'Active(file): 490524 kB' 'Inactive(file): 3526936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641020 kB' 'Mapped: 223876 kB' 'Shmem: 7590288 kB' 'KReclaimable: 302808 kB' 'Slab: 1115068 kB' 'SReclaimable: 302808 kB' 'SUnreclaim: 812260 kB' 'KernelStack: 27616 kB' 'PageTables: 8692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 9666860 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237068 kB' 'VmallocChunk: 0 kB' 'Percpu: 110016 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3888500 kB' 'DirectMap2M: 32491520 kB' 'DirectMap1G: 99614720 kB' 00:05:16.810 10:21:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.810 10:21:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.810 10:21:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.810 10:21:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.810 10:21:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.810 10:21:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.810 10:21:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.810 10:21:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.810 10:21:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.810 10:21:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.810 10:21:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.810 10:21:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.810 10:21:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.810 10:21:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.810 10:21:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.810 10:21:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.810 10:21:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.810 10:21:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.810 10:21:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.810 10:21:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.810 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.810 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.810 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.810 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.810 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.810 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.810 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.810 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.810 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.810 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.810 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.810 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.810 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.810 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.810 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.810 10:21:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.810 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.810 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.810 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.810 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.810 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.810 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.810 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.810 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.810 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.810 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.810 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.810 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.810 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.810 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.810 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.810 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.810 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.810 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.810 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
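The xtrace above is the per-key scan of /proc/meminfo: each line is split on ': ' into a key and a value, every key other than the requested one (AnonHugePages at this point) hits the traced continue, and only the matching key's value is echoed back to the caller. A minimal stand-alone sketch of that pattern follows; the helper name get_meminfo_value and the hard-coded /proc/meminfo path are illustrative only, not the script's own get_meminfo.

    # Illustrative sketch: mirrors the IFS=': ' / read -r / continue loop traced above.
    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip every key except the requested one
            echo "$val"                        # value in kB, or a bare count for HugePages_* keys
            return 0
        done < /proc/meminfo
        return 1
    }
    # e.g. get_meminfo_value AnonHugePages prints the AnonHugePages figure from /proc/meminfo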
00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.811 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # 
[[ -e /sys/devices/system/node/node/meminfo ]] 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338856 kB' 'MemFree: 108213100 kB' 'MemAvailable: 111557632 kB' 'Buffers: 4132 kB' 'Cached: 11603620 kB' 'SwapCached: 0 kB' 'Active: 8712964 kB' 'Inactive: 3526936 kB' 'Active(anon): 8222440 kB' 'Inactive(anon): 0 kB' 'Active(file): 490524 kB' 'Inactive(file): 3526936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 635496 kB' 'Mapped: 223372 kB' 'Shmem: 7590292 kB' 'KReclaimable: 302808 kB' 'Slab: 1115068 kB' 'SReclaimable: 302808 kB' 'SUnreclaim: 812260 kB' 'KernelStack: 27584 kB' 'PageTables: 8580 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 9660760 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237048 kB' 'VmallocChunk: 0 kB' 'Percpu: 110016 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3888500 kB' 'DirectMap2M: 32491520 kB' 'DirectMap1G: 99614720 kB' 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.812 10:21:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.812 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.813 10:21:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.813 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338856 kB' 'MemFree: 108212852 kB' 'MemAvailable: 111557384 kB' 'Buffers: 4132 kB' 'Cached: 11603624 kB' 'SwapCached: 0 kB' 'Active: 8712580 kB' 'Inactive: 3526936 kB' 'Active(anon): 8222056 kB' 'Inactive(anon): 0 kB' 'Active(file): 490524 kB' 'Inactive(file): 3526936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 635064 kB' 'Mapped: 223356 kB' 'Shmem: 7590296 kB' 'KReclaimable: 302808 kB' 'Slab: 1115076 kB' 'SReclaimable: 302808 kB' 'SUnreclaim: 812268 kB' 'KernelStack: 27584 kB' 'PageTables: 8792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 9660780 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237048 kB' 'VmallocChunk: 0 kB' 'Percpu: 110016 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3888500 kB' 'DirectMap2M: 32491520 kB' 'DirectMap1G: 99614720 kB' 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.814 10:21:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.814 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
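Before each of these scans the trace also shows how the source file is chosen: mem_f defaults to /proc/meminfo, the per-NUMA-node file under /sys/devices/system/node is only substituted when a node argument is supplied (node is empty in this run, so the -e test fails), and the lines are read with mapfile and stripped of any leading "Node N " prefix so both formats parse identically. A sketch of that selection, assuming the hypothetical helper name pick_meminfo_lines:

    # Illustrative sketch: choose the system-wide or per-node meminfo source and normalise its lines.
    shopt -s extglob                                   # needed for the +([0-9]) pattern below
    pick_meminfo_lines() {
        local node=$1 mem_f=/proc/meminfo mem
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")               # per-node lines start with "Node 0 ..."
        printf '%s\n' "${mem[@]}"
    }
    # pick_meminfo_lines      -> system-wide /proc/meminfo
    # pick_meminfo_lines 0    -> node0's meminfo, if that sysfs file exists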
00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.815 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
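By this point the trace has already resolved anon (AnonHugePages) and surp (HugePages_Surp) to 0 and is stepping through the same scan for HugePages_Rsvd; a few lines further on the log feeds those numbers into its hugepage accounting check. A condensed, hedged recap of that sequence, reusing the illustrative get_meminfo_value helper sketched earlier (the script's own helper is get_meminfo):

    # Illustrative recap, not the script verbatim, of the accounting this stretch builds up to.
    anon=$(get_meminfo_value AnonHugePages)     # resolved to 0 earlier in the trace
    surp=$(get_meminfo_value HugePages_Surp)    # resolved to 0 earlier in the trace
    resv=$(get_meminfo_value HugePages_Rsvd)    # the query the surrounding trace is stepping through
    nr_hugepages=1024                           # the count echoed by the test just below
    # the log then checks (( 1024 == nr_hugepages + surp + resv )) and (( 1024 == nr_hugepages )),
    # i.e. every configured page is a plain hugepage with no surplus or reserved pages outstanding:
    (( 1024 == nr_hugepages + surp + resv )) && (( 1024 == nr_hugepages )) &&
        echo "hugepage accounting consistent"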
00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:16.816 nr_hugepages=1024 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:16.816 resv_hugepages=0 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:16.816 surplus_hugepages=0 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:16.816 anon_hugepages=0 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338856 kB' 'MemFree: 108214712 kB' 'MemAvailable: 111559236 kB' 'Buffers: 4132 kB' 'Cached: 11603664 kB' 'SwapCached: 0 kB' 'Active: 8712608 kB' 'Inactive: 3526936 kB' 'Active(anon): 8222084 kB' 'Inactive(anon): 0 kB' 'Active(file): 490524 kB' 'Inactive(file): 3526936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 635056 kB' 'Mapped: 223356 kB' 'Shmem: 7590336 kB' 'KReclaimable: 302792 kB' 'Slab: 1115124 kB' 'SReclaimable: 302792 kB' 'SUnreclaim: 812332 kB' 'KernelStack: 27600 kB' 'PageTables: 8856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 9660804 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237048 kB' 'VmallocChunk: 0 kB' 'Percpu: 110016 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3888500 kB' 'DirectMap2M: 32491520 kB' 'DirectMap1G: 99614720 kB' 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.816 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.817 
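The printf above is the full /proc/meminfo snapshot that the following xtrace walks through key by key. For a quick manual spot-check of just the hugepage counters involved (outside the harness, illustrative only), a plain grep reports the same figures:
# Not part of setup/common.sh; just pulls the hugepage counters out of /proc/meminfo.
grep -E '^(HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize|Hugetlb):' /proc/meminfo
# On the node traced here this shows HugePages_Total/Free 1024, Rsvd/Surp 0,
# Hugepagesize 2048 kB and Hugetlb 2097152 kB (1024 pages x 2 MiB).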
10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.817 10:21:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
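The long run of '[[ <key> == HugePages_Total ]]' / 'continue' pairs above and below is bash xtrace of setup/common.sh's get_meminfo scanning that snapshot one 'Key: value' pair at a time with IFS=': '. A minimal standalone sketch of the same lookup (simplified; the harness reads from a captured array rather than the file directly) would be:
# Sketch: split each "Key: value ..." line on ': ' and print the value for the
# requested key, mirroring the read/IFS loop seen in the trace.
lookup_meminfo() {
    local key=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$key" ]]; then
            echo "$val"
            return 0
        fi
    done < "$file"
    return 1
}
lookup_meminfo HugePages_Total   # prints 1024 in the run traced above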
00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.818 10:21:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.818 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 52311284 kB' 'MemUsed: 13347724 kB' 'SwapCached: 0 kB' 'Active: 6189656 kB' 'Inactive: 3423008 kB' 'Active(anon): 5875076 kB' 'Inactive(anon): 0 kB' 'Active(file): 314580 kB' 'Inactive(file): 3423008 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9376380 kB' 'Mapped: 132108 kB' 'AnonPages: 239492 kB' 'Shmem: 5638792 kB' 'KernelStack: 13944 kB' 'PageTables: 3772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 195312 kB' 'Slab: 627208 kB' 'SReclaimable: 195312 kB' 'SUnreclaim: 431896 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.819 10:21:22 
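The node0 snapshot just printed comes from /sys/devices/system/node/node0/meminfo, whose lines carry a 'Node 0' prefix; the mem=("${mem[@]#Node +([0-9]) }") step in the trace strips that prefix so the same key lookup works for global and per-node files alike. A rough by-hand equivalent (sketch, not the harness code) is:
# Sketch: normalise node 0's meminfo so it looks like /proc/meminfo, then
# filter the hugepage counters. extglob is needed for the +([0-9]) pattern.
shopt -s extglob
mapfile -t mem < /sys/devices/system/node/node0/meminfo
mem=("${mem[@]#Node +([0-9]) }")     # "Node 0 HugePages_Free: 1024" -> "HugePages_Free: 1024"
printf '%s\n' "${mem[@]}" | grep -E '^HugePages_(Total|Free|Surp):'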
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.819 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.820 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.820 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.820 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.820 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.820 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.820 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.820 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.820 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.820 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.820 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.820 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.820 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.820 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.820 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:05:16.820 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.820 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.820 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.820 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.820 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.820 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.820 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.820 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.820 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.820 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.820 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.820 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.820 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.820 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.820 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.820 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.820 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.820 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.820 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.820 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.820 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:16.820 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:16.820 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:16.820 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:16.820 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:16.820 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:16.820 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:16.820 node0=1024 expecting 1024 00:05:16.820 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:16.820 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:16.820 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:16.820 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:05:16.820 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:16.820 10:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:21.029 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:05:21.029 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:05:21.029 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:05:21.029 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:05:21.029 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:05:21.029 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:05:21.029 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:05:21.029 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:05:21.029 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:05:21.029 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:05:21.029 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:05:21.029 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:05:21.029 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:05:21.029 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:05:21.029 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:05:21.029 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:05:21.029 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:05:21.029 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338856 kB' 'MemFree: 108244444 kB' 'MemAvailable: 111588968 kB' 'Buffers: 4132 kB' 'Cached: 11603772 kB' 'SwapCached: 0 kB' 'Active: 8714836 kB' 'Inactive: 3526936 kB' 'Active(anon): 8224312 kB' 'Inactive(anon): 0 kB' 'Active(file): 490524 kB' 'Inactive(file): 3526936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636656 kB' 'Mapped: 223508 kB' 'Shmem: 7590444 kB' 'KReclaimable: 302792 kB' 'Slab: 1115572 kB' 'SReclaimable: 302792 kB' 'SUnreclaim: 812780 kB' 'KernelStack: 27664 kB' 'PageTables: 9344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 9664948 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237208 kB' 'VmallocChunk: 0 kB' 'Percpu: 110016 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3888500 kB' 'DirectMap2M: 32491520 kB' 'DirectMap1G: 99614720 kB' 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.029 10:21:25 
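The INFO line in the scripts/setup.sh output above (NRHUGE=512 requested, 1024 pages already allocated on node0) boils down to the standard kernel interface for persistent hugepages: a write to nr_hugepages, here per NUMA node. A sketch of that mechanism only (assumed simplification; scripts/setup.sh layers its own policy on top, such as skipping the write when enough pages already exist) looks like:
# Root required. The per-node count of persistent 2 MiB hugepages lives in sysfs.
NRHUGE=512
node_sysfs=/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
current=$(cat "$node_sysfs")
if (( current >= NRHUGE )); then
    echo "INFO: Requested $NRHUGE hugepages but $current already allocated on node0"
else
    echo "$NRHUGE" > "$node_sysfs"
fi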
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.029 10:21:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.029 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.030 
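Just before this AnonHugePages pass, hugepages.sh@96 compared the transparent_hugepage mode string ('always [madvise] never', i.e. madvise is selected) against '*[never]*', and only because THP is not disabled does it go on to read AnonHugePages, which is 0 kB in this run. Checked by hand that is roughly (illustrative sketch, standard sysfs/procfs paths assumed):
# Sketch: count anonymous hugepages only when THP is not set to [never].
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
else
    anon=0
fi
echo "anon_hugepages=${anon:-0}"    # 0 here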
10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.030 10:21:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.030 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.031 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338856 kB' 'MemFree: 108246948 kB' 'MemAvailable: 111591472 kB' 'Buffers: 4132 kB' 'Cached: 11603772 kB' 'SwapCached: 0 kB' 'Active: 8715080 kB' 'Inactive: 3526936 kB' 'Active(anon): 8224556 kB' 'Inactive(anon): 0 kB' 'Active(file): 490524 kB' 'Inactive(file): 3526936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636948 kB' 'Mapped: 223508 kB' 'Shmem: 7590444 kB' 'KReclaimable: 302792 kB' 'Slab: 1115632 kB' 'SReclaimable: 302792 kB' 'SUnreclaim: 812840 kB' 'KernelStack: 27808 kB' 'PageTables: 9440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 9684712 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237144 kB' 'VmallocChunk: 0 kB' 'Percpu: 110016 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3888500 kB' 'DirectMap2M: 32491520 kB' 'DirectMap1G: 99614720 kB' 00:05:21.031 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.031 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.031 10:21:25 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:21.031 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.031 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.031 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.031 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.031 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.031 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.031 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.031 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.031 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.031 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.031 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.031 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.031 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.031 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.031 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.031 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.031 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.031 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.031 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.031 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.031 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.031 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.031 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.031 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.031 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.031 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.031 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.031 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.031 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.031 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.031 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.031 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.031 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.031 10:21:25 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:21.031 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.031 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.031 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.031 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.031 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.031 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.031 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.031 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.031 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.031 10:21:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.031 
10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.031 10:21:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.031 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.032 10:21:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338856 kB' 'MemFree: 108246460 kB' 'MemAvailable: 111590984 kB' 'Buffers: 4132 kB' 'Cached: 11603796 kB' 'SwapCached: 0 kB' 'Active: 8714032 kB' 'Inactive: 3526936 kB' 'Active(anon): 8223508 kB' 'Inactive(anon): 0 kB' 'Active(file): 490524 kB' 'Inactive(file): 3526936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636292 kB' 'Mapped: 223432 kB' 'Shmem: 7590468 kB' 'KReclaimable: 302792 kB' 'Slab: 1115632 kB' 'SReclaimable: 302792 kB' 'SUnreclaim: 812840 kB' 'KernelStack: 27776 kB' 'PageTables: 8924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 9664620 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237128 kB' 'VmallocChunk: 0 kB' 'Percpu: 110016 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3888500 kB' 'DirectMap2M: 32491520 kB' 'DirectMap1G: 99614720 kB' 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.032 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.033 10:21:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.033 
10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.033 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:21.034 nr_hugepages=1024 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:21.034 resv_hugepages=0 
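The trace up to this point is setup/common.sh's get_meminfo helper doing a linear scan of /proc/meminfo three times in a row: once for AnonHugePages (hugepages.sh@97, anon=0), once for HugePages_Surp (hugepages.sh@99, surp=0) and once for HugePages_Rsvd (hugepages.sh@100, resv=0). Each scan splits every meminfo line on ': ' into a key and a value, skips non-matching keys with continue, and echoes the value of the requested key. A minimal sketch of that pattern, reconstructed from the xtrace lines above rather than copied from the SPDK sources, is below; the per-node sysfs branch is only hinted at in a comment because this run takes the plain /proc/meminfo path with an empty node argument.

get_meminfo_sketch() {
    # Sketch reconstructed from common.sh@17-@33 in this log; names and
    # structure here are assumptions, only the parsing pattern is from the trace.
    local get=$1          # e.g. AnonHugePages, HugePages_Surp, HugePages_Rsvd
    local var val _
    local mem_f=/proc/meminfo
    # The real helper also accepts a NUMA node and, when
    # /sys/devices/system/node/node<N>/meminfo exists, reads that file instead,
    # stripping the "Node <N> " prefix from each line; node is '' in this run.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the long run of "continue" entries above
        echo "$val"                        # common.sh@33: emit the matched value
        return 0
    done < "$mem_f"
    return 1
}

# Example: on this runner all three lookups print 0.
get_meminfo_sketch AnonHugePages
get_meminfo_sketch HugePages_Surp
get_meminfo_sketch HugePages_Rsvd

With anon, surp and resv all 0 here, the arithmetic check that follows at hugepages.sh@107, (( 1024 == nr_hugepages + surp + resv )), reduces to 1024 == 1024 + 0 + 0, assuming the expanded 1024 on its left is the hugepage count being verified.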
00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:21.034 surplus_hugepages=0 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:21.034 anon_hugepages=0 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338856 kB' 'MemFree: 108245204 kB' 'MemAvailable: 111589728 kB' 'Buffers: 4132 kB' 'Cached: 11603796 kB' 'SwapCached: 0 kB' 'Active: 8714884 kB' 'Inactive: 3526936 kB' 'Active(anon): 8224360 kB' 'Inactive(anon): 0 kB' 'Active(file): 490524 kB' 'Inactive(file): 3526936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 637140 kB' 'Mapped: 223432 kB' 'Shmem: 7590468 kB' 'KReclaimable: 302792 kB' 'Slab: 1115632 kB' 'SReclaimable: 302792 kB' 'SUnreclaim: 812840 kB' 'KernelStack: 27840 kB' 'PageTables: 9252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 9664768 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237144 kB' 'VmallocChunk: 0 kB' 'Percpu: 110016 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3888500 kB' 'DirectMap2M: 32491520 kB' 'DirectMap1G: 99614720 kB' 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.034 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.035 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.036 10:21:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 
0 )) 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 52327800 kB' 'MemUsed: 13331208 kB' 'SwapCached: 0 kB' 'Active: 6189924 kB' 'Inactive: 3423008 kB' 'Active(anon): 5875344 kB' 'Inactive(anon): 0 kB' 'Active(file): 314580 kB' 'Inactive(file): 3423008 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9376448 kB' 'Mapped: 132168 kB' 'AnonPages: 239588 kB' 'Shmem: 5638860 kB' 'KernelStack: 14216 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 195312 kB' 'Slab: 627396 kB' 'SReclaimable: 195312 kB' 'SUnreclaim: 432084 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.036 10:21:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.036 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
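At this point get_meminfo has been re-entered with node=0, so mem_f switches to /sys/devices/system/node/node0/meminfo and the leading "Node 0 " prefix is stripped from every line before the same key scan runs, this time for HugePages_Surp. A sketch of that per-node variant, with the same caveat that the names are illustrative:

    shopt -s extglob   # the +([0-9]) pattern below needs extglob, as in the traced common.sh
    node_meminfo_field() {
        # Print the value of field $1 (e.g. HugePages_Surp) for NUMA node $2.
        local get=$1 node=$2 line var val _
        while IFS= read -r line; do
            line=${line#Node +([0-9]) }              # drop the "Node N " prefix these files carry
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < "/sys/devices/system/node/node${node}/meminfo"
        return 1
    }
    # node_meminfo_field HugePages_Surp 0   -> 0, matching the "echo 0" the trace reaches below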
00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:21.037 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:21.038 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:21.038 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:21.038 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:21.038 node0=1024 expecting 1024 00:05:21.038 10:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:21.038 00:05:21.038 real 0m7.989s 00:05:21.038 user 0m3.158s 00:05:21.038 sys 0m4.963s 00:05:21.038 10:21:26 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.038 10:21:26 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:21.038 ************************************ 00:05:21.038 END TEST no_shrink_alloc 00:05:21.038 ************************************ 00:05:21.038 10:21:26 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:21.038 10:21:26 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:21.038 10:21:26 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:21.038 10:21:26 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:21.038 10:21:26 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:21.038 10:21:26 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:21.038 10:21:26 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:21.038 10:21:26 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:21.038 10:21:26 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:21.038 10:21:26 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:21.038 10:21:26 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:21.038 10:21:26 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:21.038 10:21:26 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:21.038 10:21:26 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:21.038 10:21:26 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:21.038 00:05:21.038 real 0m28.662s 00:05:21.038 user 0m11.217s 00:05:21.038 sys 0m17.778s 00:05:21.038 10:21:26 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.038 10:21:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:21.038 ************************************ 00:05:21.038 END TEST hugepages 00:05:21.038 ************************************ 00:05:21.038 10:21:26 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:21.038 10:21:26 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 
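With no_shrink_alloc and the hugepages suite finished, clear_hp resets every per-node hugepage pool to 0; the xtrace only shows the bare "echo 0", so the redirect target is assumed here to be each pool's nr_hugepages file in the standard sysfs layout. A hedged sketch of that cleanup (run as root; destructive to any reserved pages):

    for node in /sys/devices/system/node/node[0-9]*; do
        [[ -d $node ]] || continue
        for hp in "$node"/hugepages/hugepages-*; do
            [[ -d $hp ]] || continue
            echo 0 > "$hp/nr_hugepages"    # assumed target; the trace elides the redirection
        done
    done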
00:05:21.038 10:21:26 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:21.038 10:21:26 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.038 10:21:26 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:21.038 ************************************ 00:05:21.038 START TEST driver 00:05:21.038 ************************************ 00:05:21.038 10:21:26 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:21.038 * Looking for test storage... 00:05:21.038 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:21.038 10:21:26 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:21.038 10:21:26 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:21.038 10:21:26 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:26.326 10:21:31 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:26.326 10:21:31 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.326 10:21:31 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.326 10:21:31 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:26.326 ************************************ 00:05:26.326 START TEST guess_driver 00:05:26.326 ************************************ 00:05:26.326 10:21:31 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:05:26.326 10:21:31 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:26.326 10:21:31 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:26.326 10:21:31 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:26.326 10:21:31 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:26.326 10:21:31 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:26.326 10:21:31 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:26.326 10:21:31 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:26.326 10:21:31 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:26.326 10:21:31 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:26.326 10:21:31 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 370 > 0 )) 00:05:26.326 10:21:31 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:05:26.326 10:21:31 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:05:26.326 10:21:31 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:05:26.326 10:21:31 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:05:26.326 10:21:31 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:05:26.326 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:26.326 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:26.326 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:26.326 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:26.326 insmod 
/lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:05:26.326 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:05:26.326 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:05:26.326 10:21:31 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:05:26.326 10:21:31 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:05:26.326 10:21:31 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:05:26.326 10:21:31 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:26.326 10:21:31 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:05:26.326 Looking for driver=vfio-pci 00:05:26.326 10:21:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:26.326 10:21:31 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:26.326 10:21:31 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:26.326 10:21:31 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:29.626 10:21:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:29.626 10:21:35 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:29.626 10:21:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:29.626 10:21:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:29.626 10:21:35 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:29.626 10:21:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:29.626 10:21:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:29.626 10:21:35 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:29.626 10:21:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:29.626 10:21:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:29.626 10:21:35 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:29.626 10:21:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:29.626 10:21:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:29.626 10:21:35 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:29.626 10:21:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:29.886 10:21:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:29.886 10:21:35 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:29.886 10:21:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:29.886 10:21:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:29.886 10:21:35 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:29.886 10:21:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:29.886 10:21:35 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:29.886 10:21:35 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:29.886 10:21:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:29.886 10:21:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:29.886 10:21:35 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:29.886 10:21:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:29.886 10:21:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:29.886 10:21:35 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:29.886 10:21:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:29.886 10:21:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:29.886 10:21:35 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:29.886 10:21:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:29.886 10:21:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:29.886 10:21:35 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:29.886 10:21:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:29.886 10:21:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:29.886 10:21:35 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:29.886 10:21:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:29.886 10:21:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:29.886 10:21:35 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:29.886 10:21:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:29.886 10:21:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:29.886 10:21:35 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:29.886 10:21:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:29.886 10:21:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:29.886 10:21:35 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:29.886 10:21:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:29.886 10:21:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:29.886 10:21:35 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:29.886 10:21:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:29.886 10:21:35 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:29.886 10:21:35 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:29.886 10:21:35 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:29.886 10:21:35 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 
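The guess_driver run above reduces to three checks: vfio's enable_unsafe_noiommu_mode parameter exists (and reads N), /sys/kernel/iommu_groups is populated (370 groups on this host), and modprobe --show-depends vfio_pci resolves to real .ko files, so the test settles on vfio-pci. A condensed sketch of that decision; the uio_pci_generic fallback is an assumption about the harness, not something this run exercises:

    pick_pci_driver() {
        if compgen -G '/sys/kernel/iommu_groups/*' > /dev/null &&
           modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
            echo vfio-pci
        elif modprobe --show-depends uio_pci_generic &> /dev/null; then
            echo uio_pci_generic           # assumed fallback, not seen in this trace
        else
            echo 'No valid driver found'   # the sentinel the [[ ... ]] check above tests against
        fi
    }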
00:05:35.169 00:05:35.169 real 0m9.215s 00:05:35.169 user 0m3.038s 00:05:35.169 sys 0m5.419s 00:05:35.169 10:21:40 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.169 10:21:40 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:35.169 ************************************ 00:05:35.169 END TEST guess_driver 00:05:35.169 ************************************ 00:05:35.169 10:21:40 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:05:35.169 00:05:35.169 real 0m14.494s 00:05:35.169 user 0m4.649s 00:05:35.169 sys 0m8.364s 00:05:35.169 10:21:40 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.169 10:21:40 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:35.169 ************************************ 00:05:35.169 END TEST driver 00:05:35.169 ************************************ 00:05:35.169 10:21:40 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:35.169 10:21:40 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:35.169 10:21:40 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.169 10:21:40 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.169 10:21:40 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:35.169 ************************************ 00:05:35.169 START TEST devices 00:05:35.169 ************************************ 00:05:35.169 10:21:40 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:35.430 * Looking for test storage... 00:05:35.430 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:35.430 10:21:40 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:35.430 10:21:40 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:35.430 10:21:40 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:35.430 10:21:40 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:39.655 10:21:45 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:39.655 10:21:45 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:39.655 10:21:45 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:39.655 10:21:45 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:39.655 10:21:45 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:39.655 10:21:45 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:39.655 10:21:45 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:39.655 10:21:45 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:39.655 10:21:45 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:39.655 10:21:45 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:39.655 10:21:45 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:39.655 10:21:45 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:39.655 10:21:45 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:39.655 10:21:45 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:39.655 10:21:45 
setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:39.655 10:21:45 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:39.655 10:21:45 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:39.655 10:21:45 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:05:39.655 10:21:45 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:05:39.655 10:21:45 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:39.655 10:21:45 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:39.655 10:21:45 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:39.655 No valid GPT data, bailing 00:05:39.655 10:21:45 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:39.655 10:21:45 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:39.655 10:21:45 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:39.655 10:21:45 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:39.655 10:21:45 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:39.655 10:21:45 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:39.655 10:21:45 setup.sh.devices -- setup/common.sh@80 -- # echo 1920383410176 00:05:39.655 10:21:45 setup.sh.devices -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:05:39.655 10:21:45 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:39.655 10:21:45 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:05:39.655 10:21:45 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:39.655 10:21:45 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:39.655 10:21:45 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:39.655 10:21:45 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:39.655 10:21:45 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.655 10:21:45 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:39.655 ************************************ 00:05:39.655 START TEST nvme_mount 00:05:39.655 ************************************ 00:05:39.655 10:21:45 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:05:39.655 10:21:45 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:39.655 10:21:45 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:39.655 10:21:45 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:39.655 10:21:45 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:39.655 10:21:45 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:39.655 10:21:45 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:39.655 10:21:45 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:39.655 10:21:45 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:39.655 10:21:45 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 
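Before any mount, the devices suite filters candidate disks: the namespace must not be zoned, spdk-gpt.py/blkid must find no existing partition table ("No valid GPT data, bailing" is the wanted outcome), and the disk must be at least min_disk_size=3221225472 bytes (3 GiB); nvme0n1 qualifies at 1920383410176 bytes. A simplified sketch of the same eligibility test, using blkid alone where the harness also consults spdk-gpt.py:

    usable_test_disk() {
        # usage: usable_test_disk nvme0n1
        local dev=$1 min=3221225472 sectors
        [[ $(cat "/sys/block/$dev/queue/zoned" 2>/dev/null) == none ]] || return 1
        [[ -z $(blkid -s PTTYPE -o value "/dev/$dev" 2>/dev/null) ]] || return 1
        sectors=$(cat "/sys/block/$dev/size")    # reported in 512-byte sectors
        (( sectors * 512 >= min ))
    }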
00:05:39.655 10:21:45 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:39.655 10:21:45 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:39.655 10:21:45 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:39.655 10:21:45 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:39.655 10:21:45 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:39.655 10:21:45 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:39.655 10:21:45 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:39.655 10:21:45 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:39.655 10:21:45 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:39.655 10:21:45 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:40.594 Creating new GPT entries in memory. 00:05:40.594 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:40.594 other utilities. 00:05:40.594 10:21:46 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:40.594 10:21:46 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:40.594 10:21:46 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:40.594 10:21:46 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:40.594 10:21:46 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:41.593 Creating new GPT entries in memory. 00:05:41.593 The operation has completed successfully. 
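partition_drive then wipes nvme0n1 and creates the single 1 GiB test partition whose sgdisk messages appear above, serialising the sgdisk call with flock and waiting for the partition uevent via sync_dev_uevents.sh. A stripped-down equivalent, shown only to make the sector arithmetic concrete (destructive; partprobe stands in for the harness's uevent wait):

    disk=/dev/nvme0n1
    sgdisk "$disk" --zap-all                              # destroy any existing GPT/MBR structures
    flock "$disk" sgdisk "$disk" --new=1:2048:2099199     # (2099199 - 2048 + 1) * 512 B = 1 GiB
    partprobe "$disk"                                     # simplification; the log instead shows sync_dev_uevents.sh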
00:05:41.593 10:21:47 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:41.593 10:21:47 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:41.593 10:21:47 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1710654 00:05:41.593 10:21:47 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:41.593 10:21:47 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:41.593 10:21:47 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:41.853 10:21:47 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:41.853 10:21:47 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:41.853 10:21:47 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:41.853 10:21:47 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:41.853 10:21:47 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:41.853 10:21:47 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:41.853 10:21:47 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:41.853 10:21:47 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:41.853 10:21:47 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:41.853 10:21:47 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:41.853 10:21:47 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:41.853 10:21:47 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:41.853 10:21:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.853 10:21:47 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:41.853 10:21:47 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:41.853 10:21:47 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:41.853 10:21:47 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:45.149 10:21:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:45.149 10:21:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.149 10:21:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:45.149 10:21:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.149 10:21:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:45.149 10:21:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.149 10:21:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:45.149 10:21:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.149 10:21:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:45.149 10:21:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.149 10:21:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:45.149 10:21:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.149 10:21:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:45.149 10:21:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.149 10:21:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:45.149 10:21:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.149 10:21:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:45.149 10:21:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:45.149 10:21:50 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:45.149 10:21:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.149 10:21:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:45.149 10:21:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.149 10:21:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:45.149 10:21:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.149 10:21:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:45.149 10:21:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.149 10:21:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:45.149 10:21:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.149 10:21:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:45.149 10:21:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.149 10:21:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:45.149 10:21:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.149 10:21:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:45.149 10:21:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.149 10:21:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:45.149 10:21:50 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.410 10:21:50 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:45.410 10:21:50 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:45.410 10:21:50 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:45.410 10:21:50 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:45.410 10:21:50 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:45.410 10:21:50 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:45.410 10:21:50 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:45.410 10:21:50 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:45.410 10:21:50 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:45.410 10:21:50 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:45.410 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:45.410 10:21:50 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:45.410 10:21:50 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:45.673 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:45.673 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:05:45.673 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:45.673 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:45.673 10:21:51 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:45.673 10:21:51 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:45.673 10:21:51 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:45.673 10:21:51 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:45.673 10:21:51 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:45.673 10:21:51 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:45.673 10:21:51 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:45.673 10:21:51 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:45.673 10:21:51 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:45.673 10:21:51 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local 
mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:45.673 10:21:51 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:45.673 10:21:51 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:45.673 10:21:51 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:45.673 10:21:51 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:45.673 10:21:51 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:45.673 10:21:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.673 10:21:51 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:45.673 10:21:51 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:45.673 10:21:51 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:45.673 10:21:51 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:49.901 10:21:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:49.901 10:21:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.901 10:21:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:49.901 10:21:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.901 10:21:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:49.901 10:21:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.901 10:21:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:49.901 10:21:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.901 10:21:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:49.901 10:21:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.901 10:21:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:49.901 10:21:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.901 10:21:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:49.901 10:21:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.901 10:21:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:49.901 10:21:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.901 10:21:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:49.901 10:21:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:49.901 10:21:54 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:49.901 10:21:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 
-- # read -r pci _ _ status 00:05:49.901 10:21:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:49.901 10:21:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.901 10:21:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:49.901 10:21:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.901 10:21:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:49.901 10:21:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.901 10:21:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:49.901 10:21:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.901 10:21:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:49.901 10:21:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.901 10:21:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:49.901 10:21:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.901 10:21:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:49.901 10:21:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.901 10:21:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:49.901 10:21:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.901 10:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:49.901 10:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:49.901 10:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:49.901 10:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:49.901 10:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:49.901 10:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:49.901 10:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:05:49.902 10:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:49.902 10:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:49.902 10:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:49.902 10:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:49.902 10:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:49.902 10:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:49.902 10:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:49.902 
10:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.902 10:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:49.902 10:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:49.902 10:21:55 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:49.902 10:21:55 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:53.204 10:21:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:53.204 10:21:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.204 10:21:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:53.204 10:21:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.204 10:21:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:53.204 10:21:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.204 10:21:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:53.204 10:21:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.204 10:21:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:53.204 10:21:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.204 10:21:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:53.204 10:21:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.204 10:21:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:53.204 10:21:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.204 10:21:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:53.204 10:21:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.204 10:21:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:53.204 10:21:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:53.204 10:21:58 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:53.204 10:21:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.204 10:21:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:53.204 10:21:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.204 10:21:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:53.204 10:21:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.204 10:21:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:53.204 10:21:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.204 
10:21:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:53.204 10:21:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.204 10:21:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:53.204 10:21:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.204 10:21:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:53.204 10:21:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.204 10:21:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:53.204 10:21:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.204 10:21:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:53.204 10:21:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.465 10:21:58 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:53.465 10:21:58 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:53.465 10:21:58 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:53.465 10:21:58 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:53.465 10:21:58 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:53.465 10:21:58 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:53.465 10:21:58 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:53.465 10:21:58 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:53.465 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:53.465 00:05:53.465 real 0m13.715s 00:05:53.465 user 0m4.297s 00:05:53.465 sys 0m7.273s 00:05:53.465 10:21:58 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.465 10:21:58 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:53.465 ************************************ 00:05:53.465 END TEST nvme_mount 00:05:53.465 ************************************ 00:05:53.465 10:21:58 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:53.465 10:21:58 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:53.465 10:21:58 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:53.465 10:21:58 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.465 10:21:58 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:53.465 ************************************ 00:05:53.465 START TEST dm_mount 00:05:53.465 ************************************ 00:05:53.465 10:21:59 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:05:53.465 10:21:59 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:53.465 10:21:59 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:53.465 10:21:59 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:53.465 10:21:59 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:53.465 10:21:59 
setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:53.465 10:21:59 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:53.465 10:21:59 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:53.465 10:21:59 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:53.465 10:21:59 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:53.465 10:21:59 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:53.465 10:21:59 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:53.465 10:21:59 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:53.465 10:21:59 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:53.465 10:21:59 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:53.465 10:21:59 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:53.465 10:21:59 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:53.465 10:21:59 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:53.465 10:21:59 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:53.465 10:21:59 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:53.465 10:21:59 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:53.465 10:21:59 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:54.407 Creating new GPT entries in memory. 00:05:54.407 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:54.407 other utilities. 00:05:54.407 10:22:00 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:54.407 10:22:00 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:54.407 10:22:00 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:54.407 10:22:00 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:54.407 10:22:00 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:55.348 Creating new GPT entries in memory. 00:05:55.348 The operation has completed successfully. 00:05:55.608 10:22:01 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:55.608 10:22:01 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:55.608 10:22:01 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:55.608 10:22:01 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:55.608 10:22:01 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:56.549 The operation has completed successfully. 
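For dm_mount the same partition_drive loop runs with part_no=2, so the disk ends up with two 1 GiB partitions, sectors 2048-2099199 and 2099200-4196351, matching the two --new arguments above. The trace then builds a device-mapper node named nvme_dm_test on top of them (it resolves to /dev/dm-1 further down) before formatting and mounting it. The dmsetup table itself is not visible in the trace, so the sketch below assumes a plain linear concatenation of the two partitions:

p1=/dev/nvme0n1p1
p2=/dev/nvme0n1p2
s1=$(blockdev --getsz "$p1")             # partition lengths in 512 B sectors
s2=$(blockdev --getsz "$p2")
# dmsetup create reads the table from stdin: start length target device offset
dmsetup create nvme_dm_test <<EOF
0 $s1 linear $p1 0
$s1 $s2 linear $p2 0
EOF
readlink -f /dev/mapper/nvme_dm_test     # e.g. /dev/dm-1, as in the trace
mkfs.ext4 -qF /dev/mapper/nvme_dm_test   # same mkfs invocation the harness uses
mount /dev/mapper/nvme_dm_test /mnt      # the harness mounts under test/setup/dm_mount instead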
00:05:56.549 10:22:02 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:56.549 10:22:02 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:56.549 10:22:02 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1716203 00:05:56.549 10:22:02 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:56.549 10:22:02 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:56.549 10:22:02 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:56.549 10:22:02 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:56.549 10:22:02 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:56.549 10:22:02 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:56.549 10:22:02 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:56.549 10:22:02 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:56.549 10:22:02 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:56.549 10:22:02 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-1 00:05:56.549 10:22:02 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-1 00:05:56.549 10:22:02 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-1 ]] 00:05:56.549 10:22:02 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-1 ]] 00:05:56.549 10:22:02 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:56.549 10:22:02 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:56.549 10:22:02 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:56.549 10:22:02 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:56.549 10:22:02 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:56.549 10:22:02 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:56.549 10:22:02 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:56.549 10:22:02 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:56.549 10:22:02 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:56.549 10:22:02 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:56.549 10:22:02 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:56.549 10:22:02 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # 
local found=0 00:05:56.549 10:22:02 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:56.549 10:22:02 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:56.549 10:22:02 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:56.549 10:22:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:56.549 10:22:02 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:56.549 10:22:02 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:56.549 10:22:02 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:56.549 10:22:02 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:06:00.858 10:22:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:00.858 10:22:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:00.858 10:22:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:00.858 10:22:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:00.858 10:22:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:00.858 10:22:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:00.858 10:22:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:00.858 10:22:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:00.858 10:22:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:00.858 10:22:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:00.858 10:22:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:00.858 10:22:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:00.859 10:22:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:00.859 10:22:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:00.859 10:22:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:00.859 10:22:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:00.859 10:22:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:00.859 10:22:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:06:00.859 10:22:05 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:06:00.859 10:22:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:00.859 10:22:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:00.859 10:22:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:00.859 10:22:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 
0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:00.859 10:22:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:00.859 10:22:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:00.859 10:22:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:00.859 10:22:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:00.859 10:22:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:00.859 10:22:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:00.859 10:22:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:00.859 10:22:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:00.859 10:22:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:00.859 10:22:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:00.859 10:22:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:00.859 10:22:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:00.859 10:22:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:00.859 10:22:05 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:00.859 10:22:05 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:06:00.859 10:22:05 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:00.859 10:22:05 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:06:00.859 10:22:05 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:00.859 10:22:05 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:00.859 10:22:05 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1 '' '' 00:06:00.859 10:22:05 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:06:00.859 10:22:05 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1 00:06:00.859 10:22:05 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:06:00.859 10:22:05 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:06:00.859 10:22:05 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:06:00.859 10:22:05 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:00.859 10:22:05 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:06:00.859 10:22:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:00.859 10:22:05 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:06:00.859 10:22:05 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:06:00.859 10:22:05 setup.sh.devices.dm_mount -- 
setup/common.sh@9 -- # [[ output == output ]] 00:06:00.859 10:22:05 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:06:04.157 10:22:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:04.157 10:22:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:04.157 10:22:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:04.157 10:22:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:04.157 10:22:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:04.157 10:22:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:04.157 10:22:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:04.157 10:22:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:04.157 10:22:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:04.157 10:22:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:04.157 10:22:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:04.157 10:22:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:04.157 10:22:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:04.157 10:22:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:04.157 10:22:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:04.157 10:22:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:04.157 10:22:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:04.157 10:22:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\1\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\1* ]] 00:06:04.158 10:22:09 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:06:04.158 10:22:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:04.158 10:22:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:04.158 10:22:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:04.158 10:22:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:04.158 10:22:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:04.158 10:22:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:04.158 10:22:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:04.158 10:22:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:04.158 10:22:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:04.158 10:22:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:04.158 10:22:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:04.158 10:22:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:04.158 10:22:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:04.158 10:22:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:04.158 10:22:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:04.158 10:22:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:04.158 10:22:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:04.158 10:22:09 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:04.158 10:22:09 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:04.158 10:22:09 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:06:04.158 10:22:09 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:06:04.158 10:22:09 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:04.158 10:22:09 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:04.158 10:22:09 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:06:04.158 10:22:09 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:04.158 10:22:09 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:06:04.158 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:04.158 10:22:09 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:04.158 10:22:09 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:06:04.158 00:06:04.158 real 0m10.769s 00:06:04.158 user 0m2.914s 00:06:04.158 sys 0m4.930s 00:06:04.158 10:22:09 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.158 10:22:09 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:06:04.158 ************************************ 00:06:04.158 END TEST dm_mount 00:06:04.158 ************************************ 00:06:04.158 10:22:09 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:06:04.158 10:22:09 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:06:04.158 10:22:09 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:06:04.158 10:22:09 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:04.158 10:22:09 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:04.158 10:22:09 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:04.158 10:22:09 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:04.158 10:22:09 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:04.417 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:06:04.417 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:06:04.417 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:04.417 /dev/nvme0n1: calling ioctl to re-read partition table: 
Success 00:06:04.417 10:22:10 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:06:04.417 10:22:10 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:04.417 10:22:10 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:04.417 10:22:10 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:04.417 10:22:10 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:04.417 10:22:10 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:06:04.417 10:22:10 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:06:04.417 00:06:04.417 real 0m29.275s 00:06:04.417 user 0m8.850s 00:06:04.417 sys 0m15.234s 00:06:04.417 10:22:10 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.417 10:22:10 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:04.417 ************************************ 00:06:04.417 END TEST devices 00:06:04.417 ************************************ 00:06:04.677 10:22:10 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:06:04.677 00:06:04.677 real 1m39.976s 00:06:04.677 user 0m33.839s 00:06:04.677 sys 0m57.677s 00:06:04.677 10:22:10 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.677 10:22:10 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:04.677 ************************************ 00:06:04.677 END TEST setup.sh 00:06:04.677 ************************************ 00:06:04.677 10:22:10 -- common/autotest_common.sh@1142 -- # return 0 00:06:04.677 10:22:10 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:06:08.874 Hugepages 00:06:08.874 node hugesize free / total 00:06:08.874 node0 1048576kB 0 / 0 00:06:08.874 node0 2048kB 2048 / 2048 00:06:08.874 node1 1048576kB 0 / 0 00:06:08.874 node1 2048kB 0 / 0 00:06:08.874 00:06:08.874 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:08.874 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:06:08.874 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:06:08.874 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:06:08.874 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:06:08.874 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:06:08.874 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:06:08.874 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:06:08.874 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:06:08.874 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:06:08.874 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:06:08.874 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:06:08.874 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:06:08.874 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:06:08.874 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:06:08.874 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:06:08.874 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:06:08.874 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:06:08.874 10:22:14 -- spdk/autotest.sh@130 -- # uname -s 00:06:08.874 10:22:14 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:06:08.874 10:22:14 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:06:08.874 10:22:14 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:12.198 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:06:12.198 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:06:12.198 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:06:12.198 0000:80:01.5 (8086 
0b00): ioatdma -> vfio-pci 00:06:12.198 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:06:12.198 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:06:12.198 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:06:12.198 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:06:12.198 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:06:12.198 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:06:12.198 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:06:12.198 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:06:12.198 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:06:12.199 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:06:12.199 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:06:12.199 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:06:14.115 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:06:14.116 10:22:19 -- common/autotest_common.sh@1532 -- # sleep 1 00:06:15.057 10:22:20 -- common/autotest_common.sh@1533 -- # bdfs=() 00:06:15.057 10:22:20 -- common/autotest_common.sh@1533 -- # local bdfs 00:06:15.057 10:22:20 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:06:15.057 10:22:20 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:06:15.057 10:22:20 -- common/autotest_common.sh@1513 -- # bdfs=() 00:06:15.057 10:22:20 -- common/autotest_common.sh@1513 -- # local bdfs 00:06:15.057 10:22:20 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:15.058 10:22:20 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:06:15.058 10:22:20 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:15.058 10:22:20 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:06:15.058 10:22:20 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:06:15.058 10:22:20 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:19.256 Waiting for block devices as requested 00:06:19.256 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:06:19.256 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:06:19.256 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:06:19.256 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:06:19.256 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:06:19.256 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:06:19.256 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:06:19.256 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:06:19.256 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:06:19.256 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:06:19.517 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:06:19.517 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:06:19.517 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:06:19.778 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:06:19.778 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:06:19.778 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:06:19.778 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:06:19.778 10:22:25 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:06:19.778 10:22:25 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:06:19.778 10:22:25 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:06:19.778 10:22:25 -- common/autotest_common.sh@1502 -- # grep 0000:65:00.0/nvme/nvme 00:06:19.778 10:22:25 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:06:19.778 10:22:25 -- 
common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:06:19.778 10:22:25 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:06:19.778 10:22:25 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:06:19.778 10:22:25 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:06:19.778 10:22:25 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:06:19.778 10:22:25 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:06:19.778 10:22:25 -- common/autotest_common.sh@1545 -- # grep oacs 00:06:19.778 10:22:25 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:06:19.778 10:22:25 -- common/autotest_common.sh@1545 -- # oacs=' 0x5f' 00:06:19.778 10:22:25 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:06:19.778 10:22:25 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:06:20.039 10:22:25 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:06:20.039 10:22:25 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:06:20.039 10:22:25 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:06:20.039 10:22:25 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:06:20.039 10:22:25 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:06:20.039 10:22:25 -- common/autotest_common.sh@1557 -- # continue 00:06:20.039 10:22:25 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:06:20.039 10:22:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:20.039 10:22:25 -- common/autotest_common.sh@10 -- # set +x 00:06:20.039 10:22:25 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:06:20.039 10:22:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:20.039 10:22:25 -- common/autotest_common.sh@10 -- # set +x 00:06:20.039 10:22:25 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:24.253 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:06:24.253 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:06:24.253 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:06:24.253 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:06:24.253 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:06:24.253 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:06:24.253 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:06:24.253 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:06:24.253 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:06:24.253 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:06:24.253 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:06:24.253 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:06:24.253 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:06:24.253 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:06:24.253 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:06:24.253 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:06:24.253 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:06:24.253 10:22:29 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:06:24.253 10:22:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:24.253 10:22:29 -- common/autotest_common.sh@10 -- # set +x 00:06:24.253 10:22:29 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:06:24.253 10:22:29 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:06:24.253 10:22:29 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:06:24.253 10:22:29 -- common/autotest_common.sh@1577 -- # bdfs=() 00:06:24.253 10:22:29 -- common/autotest_common.sh@1577 -- # local bdfs 00:06:24.253 10:22:29 -- 
common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:06:24.253 10:22:29 -- common/autotest_common.sh@1513 -- # bdfs=() 00:06:24.253 10:22:29 -- common/autotest_common.sh@1513 -- # local bdfs 00:06:24.253 10:22:29 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:24.253 10:22:29 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:24.253 10:22:29 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:06:24.253 10:22:29 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:06:24.253 10:22:29 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:06:24.253 10:22:29 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:24.253 10:22:29 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:06:24.253 10:22:29 -- common/autotest_common.sh@1580 -- # device=0xa80a 00:06:24.253 10:22:29 -- common/autotest_common.sh@1581 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:06:24.253 10:22:29 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:06:24.253 10:22:29 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:06:24.253 10:22:29 -- common/autotest_common.sh@1593 -- # return 0 00:06:24.253 10:22:29 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:06:24.253 10:22:29 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:06:24.253 10:22:29 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:24.253 10:22:29 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:24.253 10:22:29 -- spdk/autotest.sh@162 -- # timing_enter lib 00:06:24.253 10:22:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:24.253 10:22:29 -- common/autotest_common.sh@10 -- # set +x 00:06:24.253 10:22:29 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:06:24.253 10:22:29 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:24.253 10:22:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:24.253 10:22:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.253 10:22:29 -- common/autotest_common.sh@10 -- # set +x 00:06:24.253 ************************************ 00:06:24.253 START TEST env 00:06:24.253 ************************************ 00:06:24.253 10:22:29 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:24.253 * Looking for test storage... 
00:06:24.253 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:06:24.253 10:22:29 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:24.253 10:22:29 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:24.253 10:22:29 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.253 10:22:29 env -- common/autotest_common.sh@10 -- # set +x 00:06:24.253 ************************************ 00:06:24.253 START TEST env_memory 00:06:24.254 ************************************ 00:06:24.254 10:22:29 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:24.254 00:06:24.254 00:06:24.254 CUnit - A unit testing framework for C - Version 2.1-3 00:06:24.254 http://cunit.sourceforge.net/ 00:06:24.254 00:06:24.254 00:06:24.254 Suite: memory 00:06:24.254 Test: alloc and free memory map ...[2024-07-22 10:22:29.731838] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:24.254 passed 00:06:24.254 Test: mem map translation ...[2024-07-22 10:22:29.757569] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:24.254 [2024-07-22 10:22:29.757605] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:24.254 [2024-07-22 10:22:29.757652] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:24.254 [2024-07-22 10:22:29.757660] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:24.254 passed 00:06:24.254 Test: mem map registration ...[2024-07-22 10:22:29.813065] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:06:24.254 [2024-07-22 10:22:29.813088] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:24.254 passed 00:06:24.254 Test: mem map adjacent registrations ...passed 00:06:24.254 00:06:24.254 Run Summary: Type Total Ran Passed Failed Inactive 00:06:24.254 suites 1 1 n/a 0 0 00:06:24.254 tests 4 4 4 0 0 00:06:24.254 asserts 152 152 152 0 n/a 00:06:24.254 00:06:24.254 Elapsed time = 0.194 seconds 00:06:24.254 00:06:24.254 real 0m0.209s 00:06:24.254 user 0m0.196s 00:06:24.254 sys 0m0.012s 00:06:24.254 10:22:29 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.254 10:22:29 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:24.254 ************************************ 00:06:24.254 END TEST env_memory 00:06:24.254 ************************************ 00:06:24.254 10:22:29 env -- common/autotest_common.sh@1142 -- # return 0 00:06:24.254 10:22:29 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:24.254 10:22:29 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
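Note on the env_memory output above: the *ERROR* lines emitted by lib/env_dpdk/memory.c are the expected negative-path checks (the unit test deliberately passes invalid lengths and unaligned addresses to the mem-map calls), which is why the suite still reports all tests passed. As a rough sketch of reproducing this locally, the same unit binaries can be run by hand; repository root as working directory and hugepage sizing are assumptions, the binary paths mirror the trace above.
  # Sketch: re-run the env unit binaries outside the Jenkins harness (paths as in the log).
  sudo HUGEMEM=4096 ./scripts/setup.sh      # reserve hugepages and bind test devices (sizing assumed)
  sudo ./test/env/memory/memory_ut          # mem-map alloc/translation/registration checks
  sudo ./test/env/vtophys/vtophys           # virtual-to-physical translation checks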
00:06:24.254 10:22:29 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.254 10:22:29 env -- common/autotest_common.sh@10 -- # set +x 00:06:24.515 ************************************ 00:06:24.515 START TEST env_vtophys 00:06:24.515 ************************************ 00:06:24.515 10:22:29 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:24.515 EAL: lib.eal log level changed from notice to debug 00:06:24.515 EAL: Detected lcore 0 as core 0 on socket 0 00:06:24.515 EAL: Detected lcore 1 as core 1 on socket 0 00:06:24.515 EAL: Detected lcore 2 as core 2 on socket 0 00:06:24.515 EAL: Detected lcore 3 as core 3 on socket 0 00:06:24.515 EAL: Detected lcore 4 as core 4 on socket 0 00:06:24.515 EAL: Detected lcore 5 as core 5 on socket 0 00:06:24.515 EAL: Detected lcore 6 as core 6 on socket 0 00:06:24.515 EAL: Detected lcore 7 as core 7 on socket 0 00:06:24.515 EAL: Detected lcore 8 as core 8 on socket 0 00:06:24.515 EAL: Detected lcore 9 as core 9 on socket 0 00:06:24.515 EAL: Detected lcore 10 as core 10 on socket 0 00:06:24.515 EAL: Detected lcore 11 as core 11 on socket 0 00:06:24.515 EAL: Detected lcore 12 as core 12 on socket 0 00:06:24.515 EAL: Detected lcore 13 as core 13 on socket 0 00:06:24.515 EAL: Detected lcore 14 as core 14 on socket 0 00:06:24.515 EAL: Detected lcore 15 as core 15 on socket 0 00:06:24.515 EAL: Detected lcore 16 as core 16 on socket 0 00:06:24.515 EAL: Detected lcore 17 as core 17 on socket 0 00:06:24.515 EAL: Detected lcore 18 as core 18 on socket 0 00:06:24.515 EAL: Detected lcore 19 as core 19 on socket 0 00:06:24.515 EAL: Detected lcore 20 as core 20 on socket 0 00:06:24.515 EAL: Detected lcore 21 as core 21 on socket 0 00:06:24.515 EAL: Detected lcore 22 as core 22 on socket 0 00:06:24.515 EAL: Detected lcore 23 as core 23 on socket 0 00:06:24.515 EAL: Detected lcore 24 as core 24 on socket 0 00:06:24.515 EAL: Detected lcore 25 as core 25 on socket 0 00:06:24.515 EAL: Detected lcore 26 as core 26 on socket 0 00:06:24.515 EAL: Detected lcore 27 as core 27 on socket 0 00:06:24.515 EAL: Detected lcore 28 as core 28 on socket 0 00:06:24.515 EAL: Detected lcore 29 as core 29 on socket 0 00:06:24.515 EAL: Detected lcore 30 as core 30 on socket 0 00:06:24.515 EAL: Detected lcore 31 as core 31 on socket 0 00:06:24.515 EAL: Detected lcore 32 as core 32 on socket 0 00:06:24.515 EAL: Detected lcore 33 as core 33 on socket 0 00:06:24.515 EAL: Detected lcore 34 as core 34 on socket 0 00:06:24.515 EAL: Detected lcore 35 as core 35 on socket 0 00:06:24.515 EAL: Detected lcore 36 as core 0 on socket 1 00:06:24.515 EAL: Detected lcore 37 as core 1 on socket 1 00:06:24.515 EAL: Detected lcore 38 as core 2 on socket 1 00:06:24.515 EAL: Detected lcore 39 as core 3 on socket 1 00:06:24.515 EAL: Detected lcore 40 as core 4 on socket 1 00:06:24.515 EAL: Detected lcore 41 as core 5 on socket 1 00:06:24.515 EAL: Detected lcore 42 as core 6 on socket 1 00:06:24.515 EAL: Detected lcore 43 as core 7 on socket 1 00:06:24.515 EAL: Detected lcore 44 as core 8 on socket 1 00:06:24.515 EAL: Detected lcore 45 as core 9 on socket 1 00:06:24.515 EAL: Detected lcore 46 as core 10 on socket 1 00:06:24.515 EAL: Detected lcore 47 as core 11 on socket 1 00:06:24.515 EAL: Detected lcore 48 as core 12 on socket 1 00:06:24.515 EAL: Detected lcore 49 as core 13 on socket 1 00:06:24.515 EAL: Detected lcore 50 as core 14 on socket 1 00:06:24.515 EAL: Detected lcore 51 as core 15 on socket 1 00:06:24.515 
EAL: Detected lcore 52 as core 16 on socket 1 00:06:24.515 EAL: Detected lcore 53 as core 17 on socket 1 00:06:24.515 EAL: Detected lcore 54 as core 18 on socket 1 00:06:24.515 EAL: Detected lcore 55 as core 19 on socket 1 00:06:24.515 EAL: Detected lcore 56 as core 20 on socket 1 00:06:24.515 EAL: Detected lcore 57 as core 21 on socket 1 00:06:24.515 EAL: Detected lcore 58 as core 22 on socket 1 00:06:24.515 EAL: Detected lcore 59 as core 23 on socket 1 00:06:24.515 EAL: Detected lcore 60 as core 24 on socket 1 00:06:24.515 EAL: Detected lcore 61 as core 25 on socket 1 00:06:24.515 EAL: Detected lcore 62 as core 26 on socket 1 00:06:24.515 EAL: Detected lcore 63 as core 27 on socket 1 00:06:24.515 EAL: Detected lcore 64 as core 28 on socket 1 00:06:24.515 EAL: Detected lcore 65 as core 29 on socket 1 00:06:24.515 EAL: Detected lcore 66 as core 30 on socket 1 00:06:24.515 EAL: Detected lcore 67 as core 31 on socket 1 00:06:24.515 EAL: Detected lcore 68 as core 32 on socket 1 00:06:24.515 EAL: Detected lcore 69 as core 33 on socket 1 00:06:24.515 EAL: Detected lcore 70 as core 34 on socket 1 00:06:24.515 EAL: Detected lcore 71 as core 35 on socket 1 00:06:24.515 EAL: Detected lcore 72 as core 0 on socket 0 00:06:24.515 EAL: Detected lcore 73 as core 1 on socket 0 00:06:24.515 EAL: Detected lcore 74 as core 2 on socket 0 00:06:24.515 EAL: Detected lcore 75 as core 3 on socket 0 00:06:24.515 EAL: Detected lcore 76 as core 4 on socket 0 00:06:24.515 EAL: Detected lcore 77 as core 5 on socket 0 00:06:24.515 EAL: Detected lcore 78 as core 6 on socket 0 00:06:24.515 EAL: Detected lcore 79 as core 7 on socket 0 00:06:24.515 EAL: Detected lcore 80 as core 8 on socket 0 00:06:24.515 EAL: Detected lcore 81 as core 9 on socket 0 00:06:24.515 EAL: Detected lcore 82 as core 10 on socket 0 00:06:24.515 EAL: Detected lcore 83 as core 11 on socket 0 00:06:24.515 EAL: Detected lcore 84 as core 12 on socket 0 00:06:24.515 EAL: Detected lcore 85 as core 13 on socket 0 00:06:24.515 EAL: Detected lcore 86 as core 14 on socket 0 00:06:24.515 EAL: Detected lcore 87 as core 15 on socket 0 00:06:24.515 EAL: Detected lcore 88 as core 16 on socket 0 00:06:24.515 EAL: Detected lcore 89 as core 17 on socket 0 00:06:24.515 EAL: Detected lcore 90 as core 18 on socket 0 00:06:24.515 EAL: Detected lcore 91 as core 19 on socket 0 00:06:24.515 EAL: Detected lcore 92 as core 20 on socket 0 00:06:24.515 EAL: Detected lcore 93 as core 21 on socket 0 00:06:24.515 EAL: Detected lcore 94 as core 22 on socket 0 00:06:24.515 EAL: Detected lcore 95 as core 23 on socket 0 00:06:24.515 EAL: Detected lcore 96 as core 24 on socket 0 00:06:24.515 EAL: Detected lcore 97 as core 25 on socket 0 00:06:24.515 EAL: Detected lcore 98 as core 26 on socket 0 00:06:24.515 EAL: Detected lcore 99 as core 27 on socket 0 00:06:24.515 EAL: Detected lcore 100 as core 28 on socket 0 00:06:24.515 EAL: Detected lcore 101 as core 29 on socket 0 00:06:24.515 EAL: Detected lcore 102 as core 30 on socket 0 00:06:24.515 EAL: Detected lcore 103 as core 31 on socket 0 00:06:24.515 EAL: Detected lcore 104 as core 32 on socket 0 00:06:24.515 EAL: Detected lcore 105 as core 33 on socket 0 00:06:24.515 EAL: Detected lcore 106 as core 34 on socket 0 00:06:24.515 EAL: Detected lcore 107 as core 35 on socket 0 00:06:24.515 EAL: Detected lcore 108 as core 0 on socket 1 00:06:24.515 EAL: Detected lcore 109 as core 1 on socket 1 00:06:24.515 EAL: Detected lcore 110 as core 2 on socket 1 00:06:24.515 EAL: Detected lcore 111 as core 3 on socket 1 00:06:24.515 EAL: Detected 
lcore 112 as core 4 on socket 1 00:06:24.515 EAL: Detected lcore 113 as core 5 on socket 1 00:06:24.515 EAL: Detected lcore 114 as core 6 on socket 1 00:06:24.515 EAL: Detected lcore 115 as core 7 on socket 1 00:06:24.515 EAL: Detected lcore 116 as core 8 on socket 1 00:06:24.515 EAL: Detected lcore 117 as core 9 on socket 1 00:06:24.515 EAL: Detected lcore 118 as core 10 on socket 1 00:06:24.515 EAL: Detected lcore 119 as core 11 on socket 1 00:06:24.515 EAL: Detected lcore 120 as core 12 on socket 1 00:06:24.515 EAL: Detected lcore 121 as core 13 on socket 1 00:06:24.515 EAL: Detected lcore 122 as core 14 on socket 1 00:06:24.515 EAL: Detected lcore 123 as core 15 on socket 1 00:06:24.515 EAL: Detected lcore 124 as core 16 on socket 1 00:06:24.515 EAL: Detected lcore 125 as core 17 on socket 1 00:06:24.515 EAL: Detected lcore 126 as core 18 on socket 1 00:06:24.515 EAL: Detected lcore 127 as core 19 on socket 1 00:06:24.515 EAL: Skipped lcore 128 as core 20 on socket 1 00:06:24.515 EAL: Skipped lcore 129 as core 21 on socket 1 00:06:24.515 EAL: Skipped lcore 130 as core 22 on socket 1 00:06:24.515 EAL: Skipped lcore 131 as core 23 on socket 1 00:06:24.515 EAL: Skipped lcore 132 as core 24 on socket 1 00:06:24.515 EAL: Skipped lcore 133 as core 25 on socket 1 00:06:24.515 EAL: Skipped lcore 134 as core 26 on socket 1 00:06:24.515 EAL: Skipped lcore 135 as core 27 on socket 1 00:06:24.515 EAL: Skipped lcore 136 as core 28 on socket 1 00:06:24.515 EAL: Skipped lcore 137 as core 29 on socket 1 00:06:24.515 EAL: Skipped lcore 138 as core 30 on socket 1 00:06:24.515 EAL: Skipped lcore 139 as core 31 on socket 1 00:06:24.515 EAL: Skipped lcore 140 as core 32 on socket 1 00:06:24.515 EAL: Skipped lcore 141 as core 33 on socket 1 00:06:24.515 EAL: Skipped lcore 142 as core 34 on socket 1 00:06:24.515 EAL: Skipped lcore 143 as core 35 on socket 1 00:06:24.515 EAL: Maximum logical cores by configuration: 128 00:06:24.515 EAL: Detected CPU lcores: 128 00:06:24.515 EAL: Detected NUMA nodes: 2 00:06:24.515 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:06:24.515 EAL: Detected shared linkage of DPDK 00:06:24.515 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:06:24.515 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:06:24.515 EAL: Registered [vdev] bus. 
00:06:24.515 EAL: bus.vdev log level changed from disabled to notice 00:06:24.515 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:06:24.515 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:06:24.515 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:06:24.515 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:06:24.515 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:06:24.515 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:06:24.515 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:06:24.515 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:06:24.515 EAL: No shared files mode enabled, IPC will be disabled 00:06:24.515 EAL: No shared files mode enabled, IPC is disabled 00:06:24.515 EAL: Bus pci wants IOVA as 'DC' 00:06:24.515 EAL: Bus vdev wants IOVA as 'DC' 00:06:24.515 EAL: Buses did not request a specific IOVA mode. 00:06:24.515 EAL: IOMMU is available, selecting IOVA as VA mode. 00:06:24.515 EAL: Selected IOVA mode 'VA' 00:06:24.515 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.515 EAL: Probing VFIO support... 00:06:24.515 EAL: IOMMU type 1 (Type 1) is supported 00:06:24.515 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:24.515 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:24.515 EAL: VFIO support initialized 00:06:24.515 EAL: Ask a virtual area of 0x2e000 bytes 00:06:24.515 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:24.515 EAL: Setting up physically contiguous memory... 
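The "Selected IOVA mode 'VA'" and "VFIO support initialized" messages above only appear when the host IOMMU is enabled and the NVMe/ioat devices were rebound to vfio-pci by the setup.sh step earlier in the run. A minimal pre-flight check, sketched with standard tools plus scripts/setup.sh (no other SPDK-specific assumptions):
  # Sketch: confirm the IOMMU and vfio-pci bindings that EAL relies on above.
  ls /sys/kernel/iommu_groups | wc -l       # non-zero count means IOMMU groups exist
  sudo ./scripts/setup.sh status            # per-device driver report (vfio-pci vs. kernel driver)
  lspci -nnk -s 65:00.0                     # expect "Kernel driver in use: vfio-pci" for the test NVMe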
00:06:24.515 EAL: Setting maximum number of open files to 524288 00:06:24.515 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:24.515 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:24.515 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:24.515 EAL: Ask a virtual area of 0x61000 bytes 00:06:24.515 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:24.515 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:24.515 EAL: Ask a virtual area of 0x400000000 bytes 00:06:24.515 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:24.515 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:24.515 EAL: Ask a virtual area of 0x61000 bytes 00:06:24.515 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:24.515 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:24.515 EAL: Ask a virtual area of 0x400000000 bytes 00:06:24.515 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:24.515 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:24.515 EAL: Ask a virtual area of 0x61000 bytes 00:06:24.515 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:24.515 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:24.515 EAL: Ask a virtual area of 0x400000000 bytes 00:06:24.515 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:24.515 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:24.515 EAL: Ask a virtual area of 0x61000 bytes 00:06:24.515 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:24.515 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:24.515 EAL: Ask a virtual area of 0x400000000 bytes 00:06:24.515 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:24.515 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:24.515 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:06:24.515 EAL: Ask a virtual area of 0x61000 bytes 00:06:24.515 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:24.515 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:24.515 EAL: Ask a virtual area of 0x400000000 bytes 00:06:24.515 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:24.515 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:24.515 EAL: Ask a virtual area of 0x61000 bytes 00:06:24.515 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:24.515 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:24.515 EAL: Ask a virtual area of 0x400000000 bytes 00:06:24.515 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:24.515 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:24.515 EAL: Ask a virtual area of 0x61000 bytes 00:06:24.515 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:24.515 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:24.515 EAL: Ask a virtual area of 0x400000000 bytes 00:06:24.515 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:24.515 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:24.515 EAL: Ask a virtual area of 0x61000 bytes 00:06:24.515 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:06:24.515 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:24.515 EAL: Ask a virtual area of 0x400000000 bytes 00:06:24.515 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:06:24.515 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:24.515 EAL: Hugepages will be freed exactly as allocated. 00:06:24.515 EAL: No shared files mode enabled, IPC is disabled 00:06:24.515 EAL: No shared files mode enabled, IPC is disabled 00:06:24.515 EAL: TSC frequency is ~2400000 KHz 00:06:24.515 EAL: Main lcore 0 is ready (tid=7f4ac6981a00;cpuset=[0]) 00:06:24.515 EAL: Trying to obtain current memory policy. 00:06:24.515 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:24.515 EAL: Restoring previous memory policy: 0 00:06:24.515 EAL: request: mp_malloc_sync 00:06:24.515 EAL: No shared files mode enabled, IPC is disabled 00:06:24.515 EAL: Heap on socket 0 was expanded by 2MB 00:06:24.515 EAL: No shared files mode enabled, IPC is disabled 00:06:24.515 EAL: No shared files mode enabled, IPC is disabled 00:06:24.515 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:24.515 EAL: Mem event callback 'spdk:(nil)' registered 00:06:24.515 00:06:24.515 00:06:24.515 CUnit - A unit testing framework for C - Version 2.1-3 00:06:24.515 http://cunit.sourceforge.net/ 00:06:24.515 00:06:24.515 00:06:24.515 Suite: components_suite 00:06:24.515 Test: vtophys_malloc_test ...passed 00:06:24.515 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:24.515 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:24.515 EAL: Restoring previous memory policy: 4 00:06:24.515 EAL: Calling mem event callback 'spdk:(nil)' 00:06:24.515 EAL: request: mp_malloc_sync 00:06:24.515 EAL: No shared files mode enabled, IPC is disabled 00:06:24.515 EAL: Heap on socket 0 was expanded by 4MB 00:06:24.515 EAL: Calling mem event callback 'spdk:(nil)' 00:06:24.515 EAL: request: mp_malloc_sync 00:06:24.515 EAL: No shared files mode enabled, IPC is disabled 00:06:24.515 EAL: Heap on socket 0 was shrunk by 4MB 00:06:24.515 EAL: Trying to obtain current memory policy. 00:06:24.515 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:24.515 EAL: Restoring previous memory policy: 4 00:06:24.515 EAL: Calling mem event callback 'spdk:(nil)' 00:06:24.515 EAL: request: mp_malloc_sync 00:06:24.515 EAL: No shared files mode enabled, IPC is disabled 00:06:24.515 EAL: Heap on socket 0 was expanded by 6MB 00:06:24.515 EAL: Calling mem event callback 'spdk:(nil)' 00:06:24.515 EAL: request: mp_malloc_sync 00:06:24.515 EAL: No shared files mode enabled, IPC is disabled 00:06:24.515 EAL: Heap on socket 0 was shrunk by 6MB 00:06:24.515 EAL: Trying to obtain current memory policy. 00:06:24.515 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:24.515 EAL: Restoring previous memory policy: 4 00:06:24.515 EAL: Calling mem event callback 'spdk:(nil)' 00:06:24.515 EAL: request: mp_malloc_sync 00:06:24.515 EAL: No shared files mode enabled, IPC is disabled 00:06:24.515 EAL: Heap on socket 0 was expanded by 10MB 00:06:24.515 EAL: Calling mem event callback 'spdk:(nil)' 00:06:24.515 EAL: request: mp_malloc_sync 00:06:24.515 EAL: No shared files mode enabled, IPC is disabled 00:06:24.515 EAL: Heap on socket 0 was shrunk by 10MB 00:06:24.515 EAL: Trying to obtain current memory policy. 
00:06:24.515 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:24.515 EAL: Restoring previous memory policy: 4 00:06:24.515 EAL: Calling mem event callback 'spdk:(nil)' 00:06:24.515 EAL: request: mp_malloc_sync 00:06:24.515 EAL: No shared files mode enabled, IPC is disabled 00:06:24.515 EAL: Heap on socket 0 was expanded by 18MB 00:06:24.515 EAL: Calling mem event callback 'spdk:(nil)' 00:06:24.515 EAL: request: mp_malloc_sync 00:06:24.515 EAL: No shared files mode enabled, IPC is disabled 00:06:24.515 EAL: Heap on socket 0 was shrunk by 18MB 00:06:24.515 EAL: Trying to obtain current memory policy. 00:06:24.515 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:24.515 EAL: Restoring previous memory policy: 4 00:06:24.515 EAL: Calling mem event callback 'spdk:(nil)' 00:06:24.515 EAL: request: mp_malloc_sync 00:06:24.515 EAL: No shared files mode enabled, IPC is disabled 00:06:24.515 EAL: Heap on socket 0 was expanded by 34MB 00:06:24.515 EAL: Calling mem event callback 'spdk:(nil)' 00:06:24.515 EAL: request: mp_malloc_sync 00:06:24.515 EAL: No shared files mode enabled, IPC is disabled 00:06:24.515 EAL: Heap on socket 0 was shrunk by 34MB 00:06:24.515 EAL: Trying to obtain current memory policy. 00:06:24.515 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:24.515 EAL: Restoring previous memory policy: 4 00:06:24.515 EAL: Calling mem event callback 'spdk:(nil)' 00:06:24.515 EAL: request: mp_malloc_sync 00:06:24.515 EAL: No shared files mode enabled, IPC is disabled 00:06:24.515 EAL: Heap on socket 0 was expanded by 66MB 00:06:24.515 EAL: Calling mem event callback 'spdk:(nil)' 00:06:24.515 EAL: request: mp_malloc_sync 00:06:24.515 EAL: No shared files mode enabled, IPC is disabled 00:06:24.515 EAL: Heap on socket 0 was shrunk by 66MB 00:06:24.515 EAL: Trying to obtain current memory policy. 00:06:24.515 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:24.515 EAL: Restoring previous memory policy: 4 00:06:24.515 EAL: Calling mem event callback 'spdk:(nil)' 00:06:24.515 EAL: request: mp_malloc_sync 00:06:24.515 EAL: No shared files mode enabled, IPC is disabled 00:06:24.515 EAL: Heap on socket 0 was expanded by 130MB 00:06:24.515 EAL: Calling mem event callback 'spdk:(nil)' 00:06:24.515 EAL: request: mp_malloc_sync 00:06:24.515 EAL: No shared files mode enabled, IPC is disabled 00:06:24.515 EAL: Heap on socket 0 was shrunk by 130MB 00:06:24.516 EAL: Trying to obtain current memory policy. 00:06:24.516 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:24.516 EAL: Restoring previous memory policy: 4 00:06:24.516 EAL: Calling mem event callback 'spdk:(nil)' 00:06:24.516 EAL: request: mp_malloc_sync 00:06:24.516 EAL: No shared files mode enabled, IPC is disabled 00:06:24.516 EAL: Heap on socket 0 was expanded by 258MB 00:06:24.516 EAL: Calling mem event callback 'spdk:(nil)' 00:06:24.775 EAL: request: mp_malloc_sync 00:06:24.775 EAL: No shared files mode enabled, IPC is disabled 00:06:24.775 EAL: Heap on socket 0 was shrunk by 258MB 00:06:24.775 EAL: Trying to obtain current memory policy. 
00:06:24.775 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:24.775 EAL: Restoring previous memory policy: 4 00:06:24.775 EAL: Calling mem event callback 'spdk:(nil)' 00:06:24.775 EAL: request: mp_malloc_sync 00:06:24.775 EAL: No shared files mode enabled, IPC is disabled 00:06:24.775 EAL: Heap on socket 0 was expanded by 514MB 00:06:24.775 EAL: Calling mem event callback 'spdk:(nil)' 00:06:24.775 EAL: request: mp_malloc_sync 00:06:24.775 EAL: No shared files mode enabled, IPC is disabled 00:06:24.775 EAL: Heap on socket 0 was shrunk by 514MB 00:06:24.775 EAL: Trying to obtain current memory policy. 00:06:24.775 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:25.034 EAL: Restoring previous memory policy: 4 00:06:25.035 EAL: Calling mem event callback 'spdk:(nil)' 00:06:25.035 EAL: request: mp_malloc_sync 00:06:25.035 EAL: No shared files mode enabled, IPC is disabled 00:06:25.035 EAL: Heap on socket 0 was expanded by 1026MB 00:06:25.035 EAL: Calling mem event callback 'spdk:(nil)' 00:06:25.295 EAL: request: mp_malloc_sync 00:06:25.295 EAL: No shared files mode enabled, IPC is disabled 00:06:25.295 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:25.295 passed 00:06:25.295 00:06:25.295 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.295 suites 1 1 n/a 0 0 00:06:25.295 tests 2 2 2 0 0 00:06:25.295 asserts 497 497 497 0 n/a 00:06:25.295 00:06:25.295 Elapsed time = 0.663 seconds 00:06:25.295 EAL: Calling mem event callback 'spdk:(nil)' 00:06:25.295 EAL: request: mp_malloc_sync 00:06:25.295 EAL: No shared files mode enabled, IPC is disabled 00:06:25.295 EAL: Heap on socket 0 was shrunk by 2MB 00:06:25.295 EAL: No shared files mode enabled, IPC is disabled 00:06:25.295 EAL: No shared files mode enabled, IPC is disabled 00:06:25.295 EAL: No shared files mode enabled, IPC is disabled 00:06:25.295 00:06:25.295 real 0m0.801s 00:06:25.295 user 0m0.409s 00:06:25.295 sys 0m0.356s 00:06:25.295 10:22:30 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.295 10:22:30 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:25.295 ************************************ 00:06:25.295 END TEST env_vtophys 00:06:25.295 ************************************ 00:06:25.295 10:22:30 env -- common/autotest_common.sh@1142 -- # return 0 00:06:25.295 10:22:30 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:25.295 10:22:30 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:25.295 10:22:30 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.295 10:22:30 env -- common/autotest_common.sh@10 -- # set +x 00:06:25.295 ************************************ 00:06:25.295 START TEST env_pci 00:06:25.295 ************************************ 00:06:25.295 10:22:30 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:25.295 00:06:25.295 00:06:25.295 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.295 http://cunit.sourceforge.net/ 00:06:25.295 00:06:25.295 00:06:25.295 Suite: pci 00:06:25.295 Test: pci_hook ...[2024-07-22 10:22:30.860125] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1728349 has claimed it 00:06:25.295 EAL: Cannot find device (10000:00:01.0) 00:06:25.295 EAL: Failed to attach device on primary process 00:06:25.295 passed 00:06:25.295 
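In the env_pci test above, pci_hook claims a deliberately bogus address (10000:00:01.0), so "Cannot find device" / "Failed to attach device on primary process" is the pass condition, not a fault. The claim shows up as a lock file under /var/tmp (the naming is taken from the error message above), which can be handy when debugging a run that cannot grab a device; treat the cleanup line as a hedged suggestion, not part of the harness.
  # Sketch: inspect which PCI addresses are currently claimed by SPDK processes on this host.
  ls -l /var/tmp/spdk_pci_lock_* 2>/dev/null   # one file per claimed PCI address
  # A stale lock left by a crashed process can normally be removed once no SPDK app is running:
  # sudo rm /var/tmp/spdk_pci_lock_<BDF>       # <BDF> is a placeholder, not a literal path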
00:06:25.295 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.295 suites 1 1 n/a 0 0 00:06:25.295 tests 1 1 1 0 0 00:06:25.295 asserts 25 25 25 0 n/a 00:06:25.295 00:06:25.295 Elapsed time = 0.037 seconds 00:06:25.295 00:06:25.295 real 0m0.055s 00:06:25.295 user 0m0.020s 00:06:25.295 sys 0m0.035s 00:06:25.296 10:22:30 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.296 10:22:30 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:25.296 ************************************ 00:06:25.296 END TEST env_pci 00:06:25.296 ************************************ 00:06:25.296 10:22:30 env -- common/autotest_common.sh@1142 -- # return 0 00:06:25.296 10:22:30 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:25.296 10:22:30 env -- env/env.sh@15 -- # uname 00:06:25.296 10:22:30 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:25.296 10:22:30 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:25.296 10:22:30 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:25.296 10:22:30 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:06:25.296 10:22:30 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.296 10:22:30 env -- common/autotest_common.sh@10 -- # set +x 00:06:25.296 ************************************ 00:06:25.296 START TEST env_dpdk_post_init 00:06:25.296 ************************************ 00:06:25.296 10:22:30 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:25.557 EAL: Detected CPU lcores: 128 00:06:25.557 EAL: Detected NUMA nodes: 2 00:06:25.557 EAL: Detected shared linkage of DPDK 00:06:25.557 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:25.557 EAL: Selected IOVA mode 'VA' 00:06:25.557 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.557 EAL: VFIO support initialized 00:06:25.557 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:25.557 EAL: Using IOMMU type 1 (Type 1) 00:06:25.557 EAL: Ignore mapping IO port bar(1) 00:06:25.817 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:06:25.817 EAL: Ignore mapping IO port bar(1) 00:06:26.077 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:06:26.077 EAL: Ignore mapping IO port bar(1) 00:06:26.337 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:06:26.337 EAL: Ignore mapping IO port bar(1) 00:06:26.337 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:06:26.597 EAL: Ignore mapping IO port bar(1) 00:06:26.597 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:06:26.857 EAL: Ignore mapping IO port bar(1) 00:06:26.857 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:06:27.117 EAL: Ignore mapping IO port bar(1) 00:06:27.117 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:06:27.117 EAL: Ignore mapping IO port bar(1) 00:06:27.377 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:06:27.638 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:06:27.638 EAL: Ignore mapping IO port bar(1) 00:06:27.907 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 
00:06:27.907 EAL: Ignore mapping IO port bar(1) 00:06:27.907 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:06:28.168 EAL: Ignore mapping IO port bar(1) 00:06:28.168 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:06:28.428 EAL: Ignore mapping IO port bar(1) 00:06:28.428 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:06:28.688 EAL: Ignore mapping IO port bar(1) 00:06:28.688 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:06:28.688 EAL: Ignore mapping IO port bar(1) 00:06:28.949 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:06:28.949 EAL: Ignore mapping IO port bar(1) 00:06:29.209 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:06:29.209 EAL: Ignore mapping IO port bar(1) 00:06:29.469 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:06:29.469 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:06:29.469 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:06:29.469 Starting DPDK initialization... 00:06:29.469 Starting SPDK post initialization... 00:06:29.469 SPDK NVMe probe 00:06:29.469 Attaching to 0000:65:00.0 00:06:29.469 Attached to 0000:65:00.0 00:06:29.469 Cleaning up... 00:06:31.380 00:06:31.380 real 0m5.723s 00:06:31.380 user 0m0.182s 00:06:31.380 sys 0m0.084s 00:06:31.380 10:22:36 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.380 10:22:36 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:31.380 ************************************ 00:06:31.380 END TEST env_dpdk_post_init 00:06:31.380 ************************************ 00:06:31.380 10:22:36 env -- common/autotest_common.sh@1142 -- # return 0 00:06:31.380 10:22:36 env -- env/env.sh@26 -- # uname 00:06:31.380 10:22:36 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:31.380 10:22:36 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:31.380 10:22:36 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:31.380 10:22:36 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.380 10:22:36 env -- common/autotest_common.sh@10 -- # set +x 00:06:31.380 ************************************ 00:06:31.380 START TEST env_mem_callbacks 00:06:31.380 ************************************ 00:06:31.380 10:22:36 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:31.380 EAL: Detected CPU lcores: 128 00:06:31.380 EAL: Detected NUMA nodes: 2 00:06:31.380 EAL: Detected shared linkage of DPDK 00:06:31.380 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:31.380 EAL: Selected IOVA mode 'VA' 00:06:31.380 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.380 EAL: VFIO support initialized 00:06:31.380 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:31.380 00:06:31.380 00:06:31.380 CUnit - A unit testing framework for C - Version 2.1-3 00:06:31.380 http://cunit.sourceforge.net/ 00:06:31.380 00:06:31.380 00:06:31.380 Suite: memory 00:06:31.381 Test: test ... 
00:06:31.381 register 0x200000200000 2097152 00:06:31.381 malloc 3145728 00:06:31.381 register 0x200000400000 4194304 00:06:31.381 buf 0x200000500000 len 3145728 PASSED 00:06:31.381 malloc 64 00:06:31.381 buf 0x2000004fff40 len 64 PASSED 00:06:31.381 malloc 4194304 00:06:31.381 register 0x200000800000 6291456 00:06:31.381 buf 0x200000a00000 len 4194304 PASSED 00:06:31.381 free 0x200000500000 3145728 00:06:31.381 free 0x2000004fff40 64 00:06:31.381 unregister 0x200000400000 4194304 PASSED 00:06:31.381 free 0x200000a00000 4194304 00:06:31.381 unregister 0x200000800000 6291456 PASSED 00:06:31.381 malloc 8388608 00:06:31.381 register 0x200000400000 10485760 00:06:31.381 buf 0x200000600000 len 8388608 PASSED 00:06:31.381 free 0x200000600000 8388608 00:06:31.381 unregister 0x200000400000 10485760 PASSED 00:06:31.381 passed 00:06:31.381 00:06:31.381 Run Summary: Type Total Ran Passed Failed Inactive 00:06:31.381 suites 1 1 n/a 0 0 00:06:31.381 tests 1 1 1 0 0 00:06:31.381 asserts 15 15 15 0 n/a 00:06:31.381 00:06:31.381 Elapsed time = 0.006 seconds 00:06:31.381 00:06:31.381 real 0m0.064s 00:06:31.381 user 0m0.018s 00:06:31.381 sys 0m0.046s 00:06:31.381 10:22:36 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.381 10:22:36 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:31.381 ************************************ 00:06:31.381 END TEST env_mem_callbacks 00:06:31.381 ************************************ 00:06:31.381 10:22:36 env -- common/autotest_common.sh@1142 -- # return 0 00:06:31.381 00:06:31.381 real 0m7.353s 00:06:31.381 user 0m0.999s 00:06:31.381 sys 0m0.885s 00:06:31.381 10:22:36 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.381 10:22:36 env -- common/autotest_common.sh@10 -- # set +x 00:06:31.381 ************************************ 00:06:31.381 END TEST env 00:06:31.381 ************************************ 00:06:31.381 10:22:36 -- common/autotest_common.sh@1142 -- # return 0 00:06:31.381 10:22:36 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:31.381 10:22:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:31.381 10:22:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.381 10:22:36 -- common/autotest_common.sh@10 -- # set +x 00:06:31.381 ************************************ 00:06:31.381 START TEST rpc 00:06:31.381 ************************************ 00:06:31.381 10:22:36 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:31.381 * Looking for test storage... 00:06:31.381 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:31.640 10:22:37 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1729770 00:06:31.641 10:22:37 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:31.641 10:22:37 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:31.641 10:22:37 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1729770 00:06:31.641 10:22:37 rpc -- common/autotest_common.sh@829 -- # '[' -z 1729770 ']' 00:06:31.641 10:22:37 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.641 10:22:37 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:31.641 10:22:37 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
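The rpc suite that follows drives the freshly started spdk_tgt (launched with -e bdev) entirely over the /var/tmp/spdk.sock RPC socket. A condensed manual equivalent of the rpc_integrity flow shown next, using scripts/rpc.py with the same method names and sizes that appear in the trace; the sleep is a crude stand-in for the harness's waitforlisten, and binary/repository paths are assumptions.
  # Sketch: reproduce the rpc_integrity steps by hand against a local spdk_tgt.
  sudo ./build/bin/spdk_tgt -e bdev &               # enable the bdev tracepoint group, as in the log
  sleep 2                                           # crude wait for /var/tmp/spdk.sock to appear
  sudo ./scripts/rpc.py bdev_malloc_create 8 512    # 8 MB malloc bdev, 512-byte blocks -> "Malloc0"
  sudo ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  sudo ./scripts/rpc.py bdev_get_bdevs | jq length  # expect 2 (Malloc0 + Passthru0)
  sudo ./scripts/rpc.py bdev_passthru_delete Passthru0
  sudo ./scripts/rpc.py bdev_malloc_delete Malloc0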
00:06:31.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.641 10:22:37 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:31.641 10:22:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.641 [2024-07-22 10:22:37.138016] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:06:31.641 [2024-07-22 10:22:37.138085] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1729770 ] 00:06:31.641 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.641 [2024-07-22 10:22:37.211314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.641 [2024-07-22 10:22:37.250864] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:31.641 [2024-07-22 10:22:37.250908] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1729770' to capture a snapshot of events at runtime. 00:06:31.641 [2024-07-22 10:22:37.250916] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:31.641 [2024-07-22 10:22:37.250923] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:31.641 [2024-07-22 10:22:37.250928] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1729770 for offline analysis/debug. 00:06:31.641 [2024-07-22 10:22:37.250951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.581 10:22:37 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:32.581 10:22:37 rpc -- common/autotest_common.sh@862 -- # return 0 00:06:32.581 10:22:37 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:32.581 10:22:37 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:32.581 10:22:37 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:32.581 10:22:37 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:32.581 10:22:37 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:32.581 10:22:37 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.581 10:22:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.581 ************************************ 00:06:32.581 START TEST rpc_integrity 00:06:32.581 ************************************ 00:06:32.581 10:22:37 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:06:32.581 10:22:37 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:32.581 10:22:37 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.581 10:22:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:32.581 10:22:37 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.581 10:22:37 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:06:32.581 10:22:37 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:32.581 10:22:38 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:32.581 10:22:38 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:32.581 10:22:38 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.581 10:22:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:32.581 10:22:38 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.581 10:22:38 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:32.581 10:22:38 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:32.581 10:22:38 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.581 10:22:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:32.581 10:22:38 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.581 10:22:38 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:32.581 { 00:06:32.581 "name": "Malloc0", 00:06:32.581 "aliases": [ 00:06:32.581 "4615381e-4d1c-40cc-945e-fadb310b0452" 00:06:32.581 ], 00:06:32.581 "product_name": "Malloc disk", 00:06:32.581 "block_size": 512, 00:06:32.581 "num_blocks": 16384, 00:06:32.581 "uuid": "4615381e-4d1c-40cc-945e-fadb310b0452", 00:06:32.581 "assigned_rate_limits": { 00:06:32.581 "rw_ios_per_sec": 0, 00:06:32.581 "rw_mbytes_per_sec": 0, 00:06:32.581 "r_mbytes_per_sec": 0, 00:06:32.581 "w_mbytes_per_sec": 0 00:06:32.581 }, 00:06:32.581 "claimed": false, 00:06:32.581 "zoned": false, 00:06:32.581 "supported_io_types": { 00:06:32.581 "read": true, 00:06:32.581 "write": true, 00:06:32.581 "unmap": true, 00:06:32.581 "flush": true, 00:06:32.581 "reset": true, 00:06:32.581 "nvme_admin": false, 00:06:32.581 "nvme_io": false, 00:06:32.581 "nvme_io_md": false, 00:06:32.581 "write_zeroes": true, 00:06:32.581 "zcopy": true, 00:06:32.581 "get_zone_info": false, 00:06:32.581 "zone_management": false, 00:06:32.581 "zone_append": false, 00:06:32.581 "compare": false, 00:06:32.581 "compare_and_write": false, 00:06:32.581 "abort": true, 00:06:32.581 "seek_hole": false, 00:06:32.581 "seek_data": false, 00:06:32.581 "copy": true, 00:06:32.581 "nvme_iov_md": false 00:06:32.581 }, 00:06:32.581 "memory_domains": [ 00:06:32.581 { 00:06:32.581 "dma_device_id": "system", 00:06:32.581 "dma_device_type": 1 00:06:32.581 }, 00:06:32.581 { 00:06:32.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:32.581 "dma_device_type": 2 00:06:32.581 } 00:06:32.581 ], 00:06:32.581 "driver_specific": {} 00:06:32.581 } 00:06:32.581 ]' 00:06:32.581 10:22:38 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:32.581 10:22:38 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:32.581 10:22:38 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:32.581 10:22:38 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.581 10:22:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:32.581 [2024-07-22 10:22:38.090775] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:32.581 [2024-07-22 10:22:38.090806] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:32.581 [2024-07-22 10:22:38.090819] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x109ae70 00:06:32.581 [2024-07-22 10:22:38.090826] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:32.581 
[2024-07-22 10:22:38.092156] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:32.581 [2024-07-22 10:22:38.092177] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:32.581 Passthru0 00:06:32.581 10:22:38 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.581 10:22:38 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:32.581 10:22:38 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.581 10:22:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:32.581 10:22:38 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.581 10:22:38 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:32.581 { 00:06:32.581 "name": "Malloc0", 00:06:32.581 "aliases": [ 00:06:32.581 "4615381e-4d1c-40cc-945e-fadb310b0452" 00:06:32.581 ], 00:06:32.581 "product_name": "Malloc disk", 00:06:32.581 "block_size": 512, 00:06:32.581 "num_blocks": 16384, 00:06:32.581 "uuid": "4615381e-4d1c-40cc-945e-fadb310b0452", 00:06:32.581 "assigned_rate_limits": { 00:06:32.581 "rw_ios_per_sec": 0, 00:06:32.581 "rw_mbytes_per_sec": 0, 00:06:32.581 "r_mbytes_per_sec": 0, 00:06:32.581 "w_mbytes_per_sec": 0 00:06:32.581 }, 00:06:32.581 "claimed": true, 00:06:32.581 "claim_type": "exclusive_write", 00:06:32.581 "zoned": false, 00:06:32.581 "supported_io_types": { 00:06:32.581 "read": true, 00:06:32.581 "write": true, 00:06:32.581 "unmap": true, 00:06:32.581 "flush": true, 00:06:32.581 "reset": true, 00:06:32.581 "nvme_admin": false, 00:06:32.581 "nvme_io": false, 00:06:32.581 "nvme_io_md": false, 00:06:32.581 "write_zeroes": true, 00:06:32.581 "zcopy": true, 00:06:32.581 "get_zone_info": false, 00:06:32.581 "zone_management": false, 00:06:32.581 "zone_append": false, 00:06:32.581 "compare": false, 00:06:32.581 "compare_and_write": false, 00:06:32.581 "abort": true, 00:06:32.581 "seek_hole": false, 00:06:32.581 "seek_data": false, 00:06:32.581 "copy": true, 00:06:32.581 "nvme_iov_md": false 00:06:32.581 }, 00:06:32.581 "memory_domains": [ 00:06:32.581 { 00:06:32.581 "dma_device_id": "system", 00:06:32.581 "dma_device_type": 1 00:06:32.581 }, 00:06:32.581 { 00:06:32.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:32.581 "dma_device_type": 2 00:06:32.581 } 00:06:32.581 ], 00:06:32.581 "driver_specific": {} 00:06:32.581 }, 00:06:32.581 { 00:06:32.581 "name": "Passthru0", 00:06:32.581 "aliases": [ 00:06:32.581 "71554af9-19bb-5c4b-ae9b-6f8e52c7de00" 00:06:32.581 ], 00:06:32.581 "product_name": "passthru", 00:06:32.581 "block_size": 512, 00:06:32.581 "num_blocks": 16384, 00:06:32.581 "uuid": "71554af9-19bb-5c4b-ae9b-6f8e52c7de00", 00:06:32.581 "assigned_rate_limits": { 00:06:32.581 "rw_ios_per_sec": 0, 00:06:32.581 "rw_mbytes_per_sec": 0, 00:06:32.581 "r_mbytes_per_sec": 0, 00:06:32.581 "w_mbytes_per_sec": 0 00:06:32.581 }, 00:06:32.581 "claimed": false, 00:06:32.581 "zoned": false, 00:06:32.581 "supported_io_types": { 00:06:32.581 "read": true, 00:06:32.581 "write": true, 00:06:32.581 "unmap": true, 00:06:32.581 "flush": true, 00:06:32.581 "reset": true, 00:06:32.581 "nvme_admin": false, 00:06:32.581 "nvme_io": false, 00:06:32.581 "nvme_io_md": false, 00:06:32.581 "write_zeroes": true, 00:06:32.582 "zcopy": true, 00:06:32.582 "get_zone_info": false, 00:06:32.582 "zone_management": false, 00:06:32.582 "zone_append": false, 00:06:32.582 "compare": false, 00:06:32.582 "compare_and_write": false, 00:06:32.582 "abort": true, 00:06:32.582 "seek_hole": false, 
00:06:32.582 "seek_data": false, 00:06:32.582 "copy": true, 00:06:32.582 "nvme_iov_md": false 00:06:32.582 }, 00:06:32.582 "memory_domains": [ 00:06:32.582 { 00:06:32.582 "dma_device_id": "system", 00:06:32.582 "dma_device_type": 1 00:06:32.582 }, 00:06:32.582 { 00:06:32.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:32.582 "dma_device_type": 2 00:06:32.582 } 00:06:32.582 ], 00:06:32.582 "driver_specific": { 00:06:32.582 "passthru": { 00:06:32.582 "name": "Passthru0", 00:06:32.582 "base_bdev_name": "Malloc0" 00:06:32.582 } 00:06:32.582 } 00:06:32.582 } 00:06:32.582 ]' 00:06:32.582 10:22:38 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:32.582 10:22:38 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:32.582 10:22:38 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:32.582 10:22:38 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.582 10:22:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:32.582 10:22:38 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.582 10:22:38 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:32.582 10:22:38 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.582 10:22:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:32.582 10:22:38 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.582 10:22:38 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:32.582 10:22:38 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.582 10:22:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:32.582 10:22:38 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.582 10:22:38 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:32.582 10:22:38 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:32.582 10:22:38 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:32.582 00:06:32.582 real 0m0.291s 00:06:32.582 user 0m0.186s 00:06:32.582 sys 0m0.037s 00:06:32.582 10:22:38 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.582 10:22:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:32.582 ************************************ 00:06:32.582 END TEST rpc_integrity 00:06:32.582 ************************************ 00:06:32.582 10:22:38 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:32.582 10:22:38 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:32.582 10:22:38 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:32.582 10:22:38 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.582 10:22:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.843 ************************************ 00:06:32.843 START TEST rpc_plugins 00:06:32.843 ************************************ 00:06:32.843 10:22:38 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:06:32.843 10:22:38 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:32.843 10:22:38 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.843 10:22:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:32.843 10:22:38 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.843 10:22:38 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:32.843 10:22:38 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:06:32.843 10:22:38 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.843 10:22:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:32.843 10:22:38 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.843 10:22:38 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:32.843 { 00:06:32.843 "name": "Malloc1", 00:06:32.843 "aliases": [ 00:06:32.843 "09a329d5-41d8-4fd8-aec7-06e25f43890e" 00:06:32.843 ], 00:06:32.843 "product_name": "Malloc disk", 00:06:32.843 "block_size": 4096, 00:06:32.843 "num_blocks": 256, 00:06:32.843 "uuid": "09a329d5-41d8-4fd8-aec7-06e25f43890e", 00:06:32.843 "assigned_rate_limits": { 00:06:32.843 "rw_ios_per_sec": 0, 00:06:32.843 "rw_mbytes_per_sec": 0, 00:06:32.843 "r_mbytes_per_sec": 0, 00:06:32.843 "w_mbytes_per_sec": 0 00:06:32.843 }, 00:06:32.843 "claimed": false, 00:06:32.843 "zoned": false, 00:06:32.843 "supported_io_types": { 00:06:32.843 "read": true, 00:06:32.843 "write": true, 00:06:32.843 "unmap": true, 00:06:32.843 "flush": true, 00:06:32.843 "reset": true, 00:06:32.843 "nvme_admin": false, 00:06:32.843 "nvme_io": false, 00:06:32.843 "nvme_io_md": false, 00:06:32.843 "write_zeroes": true, 00:06:32.843 "zcopy": true, 00:06:32.843 "get_zone_info": false, 00:06:32.843 "zone_management": false, 00:06:32.843 "zone_append": false, 00:06:32.843 "compare": false, 00:06:32.843 "compare_and_write": false, 00:06:32.843 "abort": true, 00:06:32.843 "seek_hole": false, 00:06:32.843 "seek_data": false, 00:06:32.843 "copy": true, 00:06:32.843 "nvme_iov_md": false 00:06:32.843 }, 00:06:32.843 "memory_domains": [ 00:06:32.843 { 00:06:32.843 "dma_device_id": "system", 00:06:32.843 "dma_device_type": 1 00:06:32.843 }, 00:06:32.843 { 00:06:32.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:32.843 "dma_device_type": 2 00:06:32.843 } 00:06:32.843 ], 00:06:32.843 "driver_specific": {} 00:06:32.843 } 00:06:32.843 ]' 00:06:32.843 10:22:38 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:32.843 10:22:38 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:32.843 10:22:38 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:32.843 10:22:38 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.843 10:22:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:32.843 10:22:38 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.843 10:22:38 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:32.843 10:22:38 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.843 10:22:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:32.843 10:22:38 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.843 10:22:38 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:32.843 10:22:38 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:32.843 10:22:38 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:32.843 00:06:32.843 real 0m0.140s 00:06:32.843 user 0m0.085s 00:06:32.843 sys 0m0.018s 00:06:32.843 10:22:38 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.843 10:22:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:32.843 ************************************ 00:06:32.843 END TEST rpc_plugins 00:06:32.843 ************************************ 00:06:32.843 10:22:38 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:32.843 10:22:38 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:32.843 10:22:38 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:32.844 10:22:38 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.844 10:22:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.844 ************************************ 00:06:32.844 START TEST rpc_trace_cmd_test 00:06:32.844 ************************************ 00:06:32.844 10:22:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:06:32.844 10:22:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:32.844 10:22:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:32.844 10:22:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.844 10:22:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.104 10:22:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.104 10:22:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:33.104 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1729770", 00:06:33.104 "tpoint_group_mask": "0x8", 00:06:33.104 "iscsi_conn": { 00:06:33.104 "mask": "0x2", 00:06:33.104 "tpoint_mask": "0x0" 00:06:33.104 }, 00:06:33.104 "scsi": { 00:06:33.104 "mask": "0x4", 00:06:33.104 "tpoint_mask": "0x0" 00:06:33.104 }, 00:06:33.104 "bdev": { 00:06:33.104 "mask": "0x8", 00:06:33.104 "tpoint_mask": "0xffffffffffffffff" 00:06:33.104 }, 00:06:33.104 "nvmf_rdma": { 00:06:33.104 "mask": "0x10", 00:06:33.104 "tpoint_mask": "0x0" 00:06:33.104 }, 00:06:33.104 "nvmf_tcp": { 00:06:33.104 "mask": "0x20", 00:06:33.104 "tpoint_mask": "0x0" 00:06:33.104 }, 00:06:33.104 "ftl": { 00:06:33.104 "mask": "0x40", 00:06:33.104 "tpoint_mask": "0x0" 00:06:33.104 }, 00:06:33.104 "blobfs": { 00:06:33.104 "mask": "0x80", 00:06:33.104 "tpoint_mask": "0x0" 00:06:33.104 }, 00:06:33.104 "dsa": { 00:06:33.104 "mask": "0x200", 00:06:33.104 "tpoint_mask": "0x0" 00:06:33.104 }, 00:06:33.104 "thread": { 00:06:33.104 "mask": "0x400", 00:06:33.104 "tpoint_mask": "0x0" 00:06:33.104 }, 00:06:33.104 "nvme_pcie": { 00:06:33.104 "mask": "0x800", 00:06:33.104 "tpoint_mask": "0x0" 00:06:33.104 }, 00:06:33.104 "iaa": { 00:06:33.104 "mask": "0x1000", 00:06:33.104 "tpoint_mask": "0x0" 00:06:33.104 }, 00:06:33.104 "nvme_tcp": { 00:06:33.104 "mask": "0x2000", 00:06:33.104 "tpoint_mask": "0x0" 00:06:33.104 }, 00:06:33.104 "bdev_nvme": { 00:06:33.104 "mask": "0x4000", 00:06:33.104 "tpoint_mask": "0x0" 00:06:33.104 }, 00:06:33.104 "sock": { 00:06:33.104 "mask": "0x8000", 00:06:33.104 "tpoint_mask": "0x0" 00:06:33.104 } 00:06:33.104 }' 00:06:33.104 10:22:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:33.104 10:22:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:33.104 10:22:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:33.104 10:22:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:33.104 10:22:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:33.104 10:22:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:33.104 10:22:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:33.104 10:22:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:33.104 10:22:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:33.104 10:22:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
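rpc_trace_cmd_test above only verifies the trace_get_info metadata: that a tpoint_shm_path is reported and that the bdev group mask is fully enabled (0xffffffffffffffff, matching the -e bdev start option). To actually capture and read those tracepoints from a run like this, something along the following lines can be used; the shm path and the spdk_trace invocation come from the log's own hint, while the build/bin location of spdk_trace and the <PID> value are placeholders.
  # Sketch: inspect the tracepoints whose metadata the test checks above.
  sudo ./scripts/rpc.py trace_get_info | jq -r .tpoint_shm_path    # e.g. /dev/shm/spdk_tgt_trace.pid<PID>
  sudo cp /dev/shm/spdk_tgt_trace.pid<PID> /tmp/trace.snapshot     # copy for offline analysis (<PID> is a placeholder)
  sudo ./build/bin/spdk_trace -s spdk_tgt -p <PID>                 # live snapshot, as suggested earlier in the log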
00:06:33.104 00:06:33.104 real 0m0.244s 00:06:33.104 user 0m0.208s 00:06:33.104 sys 0m0.028s 00:06:33.104 10:22:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.104 10:22:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.104 ************************************ 00:06:33.104 END TEST rpc_trace_cmd_test 00:06:33.104 ************************************ 00:06:33.365 10:22:38 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:33.365 10:22:38 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:33.365 10:22:38 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:33.365 10:22:38 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:33.365 10:22:38 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:33.365 10:22:38 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.365 10:22:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.365 ************************************ 00:06:33.365 START TEST rpc_daemon_integrity 00:06:33.365 ************************************ 00:06:33.365 10:22:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:06:33.365 10:22:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:33.365 10:22:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.365 10:22:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:33.365 10:22:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.365 10:22:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:33.365 10:22:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:33.365 10:22:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:33.365 10:22:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:33.365 10:22:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.365 10:22:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:33.365 10:22:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.365 10:22:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:33.365 10:22:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:33.365 10:22:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.365 10:22:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:33.365 10:22:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.365 10:22:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:33.365 { 00:06:33.365 "name": "Malloc2", 00:06:33.365 "aliases": [ 00:06:33.365 "3cbbda49-1a23-429f-b53c-7451fc33adf8" 00:06:33.365 ], 00:06:33.365 "product_name": "Malloc disk", 00:06:33.365 "block_size": 512, 00:06:33.365 "num_blocks": 16384, 00:06:33.365 "uuid": "3cbbda49-1a23-429f-b53c-7451fc33adf8", 00:06:33.365 "assigned_rate_limits": { 00:06:33.365 "rw_ios_per_sec": 0, 00:06:33.365 "rw_mbytes_per_sec": 0, 00:06:33.365 "r_mbytes_per_sec": 0, 00:06:33.365 "w_mbytes_per_sec": 0 00:06:33.365 }, 00:06:33.365 "claimed": false, 00:06:33.365 "zoned": false, 00:06:33.365 "supported_io_types": { 00:06:33.365 "read": true, 00:06:33.365 "write": true, 00:06:33.365 "unmap": true, 00:06:33.365 "flush": true, 00:06:33.365 "reset": true, 00:06:33.365 "nvme_admin": false, 00:06:33.365 "nvme_io": false, 
00:06:33.365 "nvme_io_md": false, 00:06:33.365 "write_zeroes": true, 00:06:33.365 "zcopy": true, 00:06:33.365 "get_zone_info": false, 00:06:33.365 "zone_management": false, 00:06:33.365 "zone_append": false, 00:06:33.365 "compare": false, 00:06:33.365 "compare_and_write": false, 00:06:33.365 "abort": true, 00:06:33.365 "seek_hole": false, 00:06:33.365 "seek_data": false, 00:06:33.365 "copy": true, 00:06:33.365 "nvme_iov_md": false 00:06:33.365 }, 00:06:33.365 "memory_domains": [ 00:06:33.365 { 00:06:33.365 "dma_device_id": "system", 00:06:33.365 "dma_device_type": 1 00:06:33.365 }, 00:06:33.365 { 00:06:33.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:33.365 "dma_device_type": 2 00:06:33.365 } 00:06:33.365 ], 00:06:33.365 "driver_specific": {} 00:06:33.365 } 00:06:33.365 ]' 00:06:33.365 10:22:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:33.365 10:22:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:33.365 10:22:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:33.365 10:22:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.365 10:22:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:33.365 [2024-07-22 10:22:38.989204] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:33.365 [2024-07-22 10:22:38.989233] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:33.365 [2024-07-22 10:22:38.989247] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x108ca90 00:06:33.365 [2024-07-22 10:22:38.989253] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:33.365 [2024-07-22 10:22:38.990491] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:33.365 [2024-07-22 10:22:38.990511] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:33.365 Passthru0 00:06:33.365 10:22:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.365 10:22:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:33.365 10:22:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.365 10:22:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:33.365 10:22:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.365 10:22:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:33.365 { 00:06:33.365 "name": "Malloc2", 00:06:33.365 "aliases": [ 00:06:33.365 "3cbbda49-1a23-429f-b53c-7451fc33adf8" 00:06:33.365 ], 00:06:33.365 "product_name": "Malloc disk", 00:06:33.365 "block_size": 512, 00:06:33.365 "num_blocks": 16384, 00:06:33.365 "uuid": "3cbbda49-1a23-429f-b53c-7451fc33adf8", 00:06:33.365 "assigned_rate_limits": { 00:06:33.365 "rw_ios_per_sec": 0, 00:06:33.365 "rw_mbytes_per_sec": 0, 00:06:33.365 "r_mbytes_per_sec": 0, 00:06:33.365 "w_mbytes_per_sec": 0 00:06:33.365 }, 00:06:33.365 "claimed": true, 00:06:33.365 "claim_type": "exclusive_write", 00:06:33.365 "zoned": false, 00:06:33.365 "supported_io_types": { 00:06:33.365 "read": true, 00:06:33.365 "write": true, 00:06:33.365 "unmap": true, 00:06:33.365 "flush": true, 00:06:33.365 "reset": true, 00:06:33.365 "nvme_admin": false, 00:06:33.365 "nvme_io": false, 00:06:33.365 "nvme_io_md": false, 00:06:33.365 "write_zeroes": true, 00:06:33.365 "zcopy": true, 00:06:33.365 "get_zone_info": 
false, 00:06:33.365 "zone_management": false, 00:06:33.365 "zone_append": false, 00:06:33.365 "compare": false, 00:06:33.365 "compare_and_write": false, 00:06:33.365 "abort": true, 00:06:33.365 "seek_hole": false, 00:06:33.365 "seek_data": false, 00:06:33.365 "copy": true, 00:06:33.365 "nvme_iov_md": false 00:06:33.365 }, 00:06:33.365 "memory_domains": [ 00:06:33.365 { 00:06:33.365 "dma_device_id": "system", 00:06:33.365 "dma_device_type": 1 00:06:33.365 }, 00:06:33.365 { 00:06:33.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:33.365 "dma_device_type": 2 00:06:33.365 } 00:06:33.365 ], 00:06:33.365 "driver_specific": {} 00:06:33.365 }, 00:06:33.365 { 00:06:33.365 "name": "Passthru0", 00:06:33.365 "aliases": [ 00:06:33.365 "3b028cec-4b09-5dfc-ad91-50461be884d8" 00:06:33.365 ], 00:06:33.365 "product_name": "passthru", 00:06:33.365 "block_size": 512, 00:06:33.365 "num_blocks": 16384, 00:06:33.365 "uuid": "3b028cec-4b09-5dfc-ad91-50461be884d8", 00:06:33.365 "assigned_rate_limits": { 00:06:33.365 "rw_ios_per_sec": 0, 00:06:33.365 "rw_mbytes_per_sec": 0, 00:06:33.365 "r_mbytes_per_sec": 0, 00:06:33.365 "w_mbytes_per_sec": 0 00:06:33.365 }, 00:06:33.365 "claimed": false, 00:06:33.365 "zoned": false, 00:06:33.365 "supported_io_types": { 00:06:33.365 "read": true, 00:06:33.365 "write": true, 00:06:33.365 "unmap": true, 00:06:33.365 "flush": true, 00:06:33.365 "reset": true, 00:06:33.365 "nvme_admin": false, 00:06:33.365 "nvme_io": false, 00:06:33.365 "nvme_io_md": false, 00:06:33.365 "write_zeroes": true, 00:06:33.365 "zcopy": true, 00:06:33.365 "get_zone_info": false, 00:06:33.365 "zone_management": false, 00:06:33.365 "zone_append": false, 00:06:33.365 "compare": false, 00:06:33.365 "compare_and_write": false, 00:06:33.365 "abort": true, 00:06:33.365 "seek_hole": false, 00:06:33.365 "seek_data": false, 00:06:33.365 "copy": true, 00:06:33.365 "nvme_iov_md": false 00:06:33.365 }, 00:06:33.365 "memory_domains": [ 00:06:33.365 { 00:06:33.365 "dma_device_id": "system", 00:06:33.365 "dma_device_type": 1 00:06:33.365 }, 00:06:33.365 { 00:06:33.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:33.366 "dma_device_type": 2 00:06:33.366 } 00:06:33.366 ], 00:06:33.366 "driver_specific": { 00:06:33.366 "passthru": { 00:06:33.366 "name": "Passthru0", 00:06:33.366 "base_bdev_name": "Malloc2" 00:06:33.366 } 00:06:33.366 } 00:06:33.366 } 00:06:33.366 ]' 00:06:33.366 10:22:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:33.627 10:22:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:33.627 10:22:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:33.627 10:22:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.627 10:22:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:33.627 10:22:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.627 10:22:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:33.627 10:22:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.627 10:22:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:33.627 10:22:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.627 10:22:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:33.627 10:22:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.627 10:22:39 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:33.627 10:22:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.627 10:22:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:33.627 10:22:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:33.627 10:22:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:33.627 00:06:33.627 real 0m0.292s 00:06:33.627 user 0m0.189s 00:06:33.627 sys 0m0.042s 00:06:33.627 10:22:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.627 10:22:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:33.627 ************************************ 00:06:33.627 END TEST rpc_daemon_integrity 00:06:33.627 ************************************ 00:06:33.627 10:22:39 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:33.627 10:22:39 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:33.627 10:22:39 rpc -- rpc/rpc.sh@84 -- # killprocess 1729770 00:06:33.627 10:22:39 rpc -- common/autotest_common.sh@948 -- # '[' -z 1729770 ']' 00:06:33.627 10:22:39 rpc -- common/autotest_common.sh@952 -- # kill -0 1729770 00:06:33.627 10:22:39 rpc -- common/autotest_common.sh@953 -- # uname 00:06:33.627 10:22:39 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:33.627 10:22:39 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1729770 00:06:33.627 10:22:39 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:33.627 10:22:39 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:33.627 10:22:39 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1729770' 00:06:33.627 killing process with pid 1729770 00:06:33.627 10:22:39 rpc -- common/autotest_common.sh@967 -- # kill 1729770 00:06:33.627 10:22:39 rpc -- common/autotest_common.sh@972 -- # wait 1729770 00:06:33.888 00:06:33.888 real 0m2.450s 00:06:33.888 user 0m3.209s 00:06:33.888 sys 0m0.705s 00:06:33.888 10:22:39 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.888 10:22:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.888 ************************************ 00:06:33.888 END TEST rpc 00:06:33.888 ************************************ 00:06:33.888 10:22:39 -- common/autotest_common.sh@1142 -- # return 0 00:06:33.888 10:22:39 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:33.888 10:22:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:33.888 10:22:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.888 10:22:39 -- common/autotest_common.sh@10 -- # set +x 00:06:33.888 ************************************ 00:06:33.888 START TEST skip_rpc 00:06:33.888 ************************************ 00:06:33.888 10:22:39 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:34.150 * Looking for test storage... 
00:06:34.150 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:34.150 10:22:39 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:34.150 10:22:39 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:34.150 10:22:39 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:34.150 10:22:39 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:34.150 10:22:39 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.150 10:22:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.150 ************************************ 00:06:34.150 START TEST skip_rpc 00:06:34.150 ************************************ 00:06:34.150 10:22:39 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:06:34.150 10:22:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1730316 00:06:34.150 10:22:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:34.150 10:22:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:34.150 10:22:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:34.150 [2024-07-22 10:22:39.689060] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:06:34.150 [2024-07-22 10:22:39.689107] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1730316 ] 00:06:34.150 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.150 [2024-07-22 10:22:39.754281] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.150 [2024-07-22 10:22:39.785181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.472 10:22:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:39.472 10:22:44 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:39.472 10:22:44 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:39.472 10:22:44 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:39.472 10:22:44 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:39.472 10:22:44 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:39.472 10:22:44 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:39.472 10:22:44 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:06:39.472 10:22:44 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.472 10:22:44 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.472 10:22:44 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:39.472 10:22:44 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:39.472 10:22:44 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:39.472 10:22:44 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:39.472 10:22:44 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:39.472 10:22:44 skip_rpc.skip_rpc -- 
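With the target now up under --no-rpc-server, the next step is the heart of test_skip_rpc: an RPC call against it has to fail because no RPC listener was created. A stand-alone sketch of that expectation, assuming an SPDK checkout (the 5-second pause mirrors the script's sleep and is only illustrative):

  #!/usr/bin/env bash
  # Sketch only: start a target without its RPC server and expect any RPC to fail.
  build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  tgt_pid=$!
  sleep 5
  if ! scripts/rpc.py spdk_get_version > /dev/null 2>&1; then
      echo "RPC correctly unavailable under --no-rpc-server"
  fi
  kill "$tgt_pid"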
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:39.472 10:22:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1730316 00:06:39.472 10:22:44 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 1730316 ']' 00:06:39.472 10:22:44 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 1730316 00:06:39.472 10:22:44 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:06:39.472 10:22:44 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:39.472 10:22:44 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1730316 00:06:39.472 10:22:44 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:39.472 10:22:44 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:39.472 10:22:44 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1730316' 00:06:39.472 killing process with pid 1730316 00:06:39.472 10:22:44 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 1730316 00:06:39.472 10:22:44 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 1730316 00:06:39.472 00:06:39.472 real 0m5.262s 00:06:39.472 user 0m5.061s 00:06:39.472 sys 0m0.228s 00:06:39.472 10:22:44 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.472 10:22:44 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.472 ************************************ 00:06:39.472 END TEST skip_rpc 00:06:39.472 ************************************ 00:06:39.472 10:22:44 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:39.472 10:22:44 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:39.472 10:22:44 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:39.472 10:22:44 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.472 10:22:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.472 ************************************ 00:06:39.472 START TEST skip_rpc_with_json 00:06:39.472 ************************************ 00:06:39.472 10:22:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:06:39.472 10:22:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:39.472 10:22:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1731380 00:06:39.472 10:22:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:39.472 10:22:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:39.472 10:22:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1731380 00:06:39.472 10:22:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 1731380 ']' 00:06:39.473 10:22:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.473 10:22:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:39.473 10:22:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
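The with_json variant starting here exercises the configuration round trip: create a TCP transport over RPC, dump the live configuration with save_config, then restart the target from that JSON and check its log for the transport coming back. A rough hand-run equivalent, assuming an SPDK checkout and the default RPC socket (file names are illustrative):

  #!/usr/bin/env bash
  # Sketch only: save the running configuration and reload the target from it.
  scripts/rpc.py nvmf_create_transport -t tcp
  scripts/rpc.py save_config > /tmp/config.json
  # ...stop the first target, then bring one up from the saved JSON:
  build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /tmp/config.json > /tmp/log.txt 2>&1 &
  sleep 5
  grep -q 'TCP Transport Init' /tmp/log.txt && echo "transport restored from JSON"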
00:06:39.473 10:22:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:39.473 10:22:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:39.473 [2024-07-22 10:22:45.031380] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:06:39.473 [2024-07-22 10:22:45.031455] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1731380 ] 00:06:39.473 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.473 [2024-07-22 10:22:45.099689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.765 [2024-07-22 10:22:45.137663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.367 10:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:40.367 10:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:06:40.367 10:22:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:40.367 10:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.367 10:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:40.367 [2024-07-22 10:22:45.790495] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:40.367 request: 00:06:40.367 { 00:06:40.367 "trtype": "tcp", 00:06:40.367 "method": "nvmf_get_transports", 00:06:40.367 "req_id": 1 00:06:40.367 } 00:06:40.367 Got JSON-RPC error response 00:06:40.367 response: 00:06:40.367 { 00:06:40.367 "code": -19, 00:06:40.367 "message": "No such device" 00:06:40.367 } 00:06:40.367 10:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:40.367 10:22:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:40.367 10:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.367 10:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:40.367 [2024-07-22 10:22:45.802622] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:40.367 10:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.367 10:22:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:40.367 10:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.367 10:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:40.367 10:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.367 10:22:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:40.367 { 00:06:40.367 "subsystems": [ 00:06:40.367 { 00:06:40.367 "subsystem": "vfio_user_target", 00:06:40.367 "config": null 00:06:40.367 }, 00:06:40.367 { 00:06:40.367 "subsystem": "keyring", 00:06:40.367 "config": [] 00:06:40.367 }, 00:06:40.367 { 00:06:40.367 "subsystem": "iobuf", 00:06:40.367 "config": [ 00:06:40.367 { 00:06:40.367 "method": "iobuf_set_options", 00:06:40.367 "params": { 00:06:40.367 "small_pool_count": 8192, 00:06:40.367 "large_pool_count": 1024, 00:06:40.367 "small_bufsize": 8192, 00:06:40.367 "large_bufsize": 
135168 00:06:40.367 } 00:06:40.367 } 00:06:40.367 ] 00:06:40.367 }, 00:06:40.367 { 00:06:40.367 "subsystem": "sock", 00:06:40.367 "config": [ 00:06:40.367 { 00:06:40.367 "method": "sock_set_default_impl", 00:06:40.367 "params": { 00:06:40.367 "impl_name": "posix" 00:06:40.367 } 00:06:40.367 }, 00:06:40.367 { 00:06:40.367 "method": "sock_impl_set_options", 00:06:40.367 "params": { 00:06:40.367 "impl_name": "ssl", 00:06:40.367 "recv_buf_size": 4096, 00:06:40.367 "send_buf_size": 4096, 00:06:40.367 "enable_recv_pipe": true, 00:06:40.367 "enable_quickack": false, 00:06:40.367 "enable_placement_id": 0, 00:06:40.367 "enable_zerocopy_send_server": true, 00:06:40.367 "enable_zerocopy_send_client": false, 00:06:40.367 "zerocopy_threshold": 0, 00:06:40.367 "tls_version": 0, 00:06:40.367 "enable_ktls": false 00:06:40.367 } 00:06:40.367 }, 00:06:40.367 { 00:06:40.367 "method": "sock_impl_set_options", 00:06:40.367 "params": { 00:06:40.368 "impl_name": "posix", 00:06:40.368 "recv_buf_size": 2097152, 00:06:40.368 "send_buf_size": 2097152, 00:06:40.368 "enable_recv_pipe": true, 00:06:40.368 "enable_quickack": false, 00:06:40.368 "enable_placement_id": 0, 00:06:40.368 "enable_zerocopy_send_server": true, 00:06:40.368 "enable_zerocopy_send_client": false, 00:06:40.368 "zerocopy_threshold": 0, 00:06:40.368 "tls_version": 0, 00:06:40.368 "enable_ktls": false 00:06:40.368 } 00:06:40.368 } 00:06:40.368 ] 00:06:40.368 }, 00:06:40.368 { 00:06:40.368 "subsystem": "vmd", 00:06:40.368 "config": [] 00:06:40.368 }, 00:06:40.368 { 00:06:40.368 "subsystem": "accel", 00:06:40.368 "config": [ 00:06:40.368 { 00:06:40.368 "method": "accel_set_options", 00:06:40.368 "params": { 00:06:40.368 "small_cache_size": 128, 00:06:40.368 "large_cache_size": 16, 00:06:40.368 "task_count": 2048, 00:06:40.368 "sequence_count": 2048, 00:06:40.368 "buf_count": 2048 00:06:40.368 } 00:06:40.368 } 00:06:40.368 ] 00:06:40.368 }, 00:06:40.368 { 00:06:40.368 "subsystem": "bdev", 00:06:40.368 "config": [ 00:06:40.368 { 00:06:40.368 "method": "bdev_set_options", 00:06:40.368 "params": { 00:06:40.368 "bdev_io_pool_size": 65535, 00:06:40.368 "bdev_io_cache_size": 256, 00:06:40.368 "bdev_auto_examine": true, 00:06:40.368 "iobuf_small_cache_size": 128, 00:06:40.368 "iobuf_large_cache_size": 16 00:06:40.368 } 00:06:40.368 }, 00:06:40.368 { 00:06:40.368 "method": "bdev_raid_set_options", 00:06:40.368 "params": { 00:06:40.368 "process_window_size_kb": 1024, 00:06:40.368 "process_max_bandwidth_mb_sec": 0 00:06:40.368 } 00:06:40.368 }, 00:06:40.368 { 00:06:40.368 "method": "bdev_iscsi_set_options", 00:06:40.368 "params": { 00:06:40.368 "timeout_sec": 30 00:06:40.368 } 00:06:40.368 }, 00:06:40.368 { 00:06:40.368 "method": "bdev_nvme_set_options", 00:06:40.368 "params": { 00:06:40.368 "action_on_timeout": "none", 00:06:40.368 "timeout_us": 0, 00:06:40.368 "timeout_admin_us": 0, 00:06:40.368 "keep_alive_timeout_ms": 10000, 00:06:40.368 "arbitration_burst": 0, 00:06:40.368 "low_priority_weight": 0, 00:06:40.368 "medium_priority_weight": 0, 00:06:40.368 "high_priority_weight": 0, 00:06:40.368 "nvme_adminq_poll_period_us": 10000, 00:06:40.368 "nvme_ioq_poll_period_us": 0, 00:06:40.368 "io_queue_requests": 0, 00:06:40.368 "delay_cmd_submit": true, 00:06:40.368 "transport_retry_count": 4, 00:06:40.368 "bdev_retry_count": 3, 00:06:40.368 "transport_ack_timeout": 0, 00:06:40.368 "ctrlr_loss_timeout_sec": 0, 00:06:40.368 "reconnect_delay_sec": 0, 00:06:40.368 "fast_io_fail_timeout_sec": 0, 00:06:40.368 "disable_auto_failback": false, 00:06:40.368 "generate_uuids": 
false, 00:06:40.368 "transport_tos": 0, 00:06:40.368 "nvme_error_stat": false, 00:06:40.368 "rdma_srq_size": 0, 00:06:40.368 "io_path_stat": false, 00:06:40.368 "allow_accel_sequence": false, 00:06:40.368 "rdma_max_cq_size": 0, 00:06:40.368 "rdma_cm_event_timeout_ms": 0, 00:06:40.368 "dhchap_digests": [ 00:06:40.368 "sha256", 00:06:40.368 "sha384", 00:06:40.368 "sha512" 00:06:40.368 ], 00:06:40.368 "dhchap_dhgroups": [ 00:06:40.368 "null", 00:06:40.368 "ffdhe2048", 00:06:40.368 "ffdhe3072", 00:06:40.368 "ffdhe4096", 00:06:40.368 "ffdhe6144", 00:06:40.368 "ffdhe8192" 00:06:40.368 ] 00:06:40.368 } 00:06:40.368 }, 00:06:40.368 { 00:06:40.368 "method": "bdev_nvme_set_hotplug", 00:06:40.368 "params": { 00:06:40.368 "period_us": 100000, 00:06:40.368 "enable": false 00:06:40.368 } 00:06:40.368 }, 00:06:40.368 { 00:06:40.368 "method": "bdev_wait_for_examine" 00:06:40.368 } 00:06:40.368 ] 00:06:40.368 }, 00:06:40.368 { 00:06:40.368 "subsystem": "scsi", 00:06:40.368 "config": null 00:06:40.368 }, 00:06:40.368 { 00:06:40.368 "subsystem": "scheduler", 00:06:40.368 "config": [ 00:06:40.368 { 00:06:40.368 "method": "framework_set_scheduler", 00:06:40.368 "params": { 00:06:40.368 "name": "static" 00:06:40.368 } 00:06:40.368 } 00:06:40.368 ] 00:06:40.368 }, 00:06:40.368 { 00:06:40.368 "subsystem": "vhost_scsi", 00:06:40.368 "config": [] 00:06:40.368 }, 00:06:40.368 { 00:06:40.368 "subsystem": "vhost_blk", 00:06:40.368 "config": [] 00:06:40.368 }, 00:06:40.368 { 00:06:40.368 "subsystem": "ublk", 00:06:40.368 "config": [] 00:06:40.368 }, 00:06:40.368 { 00:06:40.368 "subsystem": "nbd", 00:06:40.368 "config": [] 00:06:40.368 }, 00:06:40.368 { 00:06:40.368 "subsystem": "nvmf", 00:06:40.368 "config": [ 00:06:40.368 { 00:06:40.368 "method": "nvmf_set_config", 00:06:40.368 "params": { 00:06:40.368 "discovery_filter": "match_any", 00:06:40.368 "admin_cmd_passthru": { 00:06:40.368 "identify_ctrlr": false 00:06:40.368 } 00:06:40.368 } 00:06:40.368 }, 00:06:40.368 { 00:06:40.368 "method": "nvmf_set_max_subsystems", 00:06:40.368 "params": { 00:06:40.368 "max_subsystems": 1024 00:06:40.368 } 00:06:40.368 }, 00:06:40.368 { 00:06:40.368 "method": "nvmf_set_crdt", 00:06:40.368 "params": { 00:06:40.368 "crdt1": 0, 00:06:40.368 "crdt2": 0, 00:06:40.368 "crdt3": 0 00:06:40.368 } 00:06:40.368 }, 00:06:40.368 { 00:06:40.368 "method": "nvmf_create_transport", 00:06:40.368 "params": { 00:06:40.368 "trtype": "TCP", 00:06:40.368 "max_queue_depth": 128, 00:06:40.368 "max_io_qpairs_per_ctrlr": 127, 00:06:40.368 "in_capsule_data_size": 4096, 00:06:40.368 "max_io_size": 131072, 00:06:40.368 "io_unit_size": 131072, 00:06:40.368 "max_aq_depth": 128, 00:06:40.368 "num_shared_buffers": 511, 00:06:40.368 "buf_cache_size": 4294967295, 00:06:40.368 "dif_insert_or_strip": false, 00:06:40.368 "zcopy": false, 00:06:40.368 "c2h_success": true, 00:06:40.368 "sock_priority": 0, 00:06:40.368 "abort_timeout_sec": 1, 00:06:40.368 "ack_timeout": 0, 00:06:40.368 "data_wr_pool_size": 0 00:06:40.368 } 00:06:40.368 } 00:06:40.368 ] 00:06:40.368 }, 00:06:40.368 { 00:06:40.368 "subsystem": "iscsi", 00:06:40.368 "config": [ 00:06:40.368 { 00:06:40.368 "method": "iscsi_set_options", 00:06:40.368 "params": { 00:06:40.368 "node_base": "iqn.2016-06.io.spdk", 00:06:40.368 "max_sessions": 128, 00:06:40.368 "max_connections_per_session": 2, 00:06:40.368 "max_queue_depth": 64, 00:06:40.368 "default_time2wait": 2, 00:06:40.368 "default_time2retain": 20, 00:06:40.368 "first_burst_length": 8192, 00:06:40.368 "immediate_data": true, 00:06:40.368 "allow_duplicated_isid": 
false, 00:06:40.368 "error_recovery_level": 0, 00:06:40.368 "nop_timeout": 60, 00:06:40.368 "nop_in_interval": 30, 00:06:40.368 "disable_chap": false, 00:06:40.368 "require_chap": false, 00:06:40.368 "mutual_chap": false, 00:06:40.368 "chap_group": 0, 00:06:40.368 "max_large_datain_per_connection": 64, 00:06:40.368 "max_r2t_per_connection": 4, 00:06:40.368 "pdu_pool_size": 36864, 00:06:40.368 "immediate_data_pool_size": 16384, 00:06:40.368 "data_out_pool_size": 2048 00:06:40.368 } 00:06:40.368 } 00:06:40.368 ] 00:06:40.368 } 00:06:40.368 ] 00:06:40.368 } 00:06:40.368 10:22:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:40.368 10:22:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1731380 00:06:40.368 10:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1731380 ']' 00:06:40.368 10:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1731380 00:06:40.368 10:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:40.368 10:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:40.368 10:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1731380 00:06:40.368 10:22:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:40.368 10:22:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:40.368 10:22:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1731380' 00:06:40.368 killing process with pid 1731380 00:06:40.368 10:22:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1731380 00:06:40.368 10:22:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1731380 00:06:40.629 10:22:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1731704 00:06:40.629 10:22:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:40.629 10:22:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:45.908 10:22:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1731704 00:06:45.908 10:22:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1731704 ']' 00:06:45.908 10:22:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1731704 00:06:45.908 10:22:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:45.908 10:22:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:45.908 10:22:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1731704 00:06:45.908 10:22:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:45.908 10:22:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:45.908 10:22:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1731704' 00:06:45.908 killing process with pid 1731704 00:06:45.908 10:22:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1731704 00:06:45.908 10:22:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 
1731704 00:06:45.908 10:22:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:45.908 10:22:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:45.908 00:06:45.908 real 0m6.508s 00:06:45.908 user 0m6.352s 00:06:45.908 sys 0m0.550s 00:06:45.908 10:22:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.908 10:22:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:45.908 ************************************ 00:06:45.908 END TEST skip_rpc_with_json 00:06:45.908 ************************************ 00:06:45.908 10:22:51 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:45.908 10:22:51 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:45.908 10:22:51 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:45.908 10:22:51 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.909 10:22:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.909 ************************************ 00:06:45.909 START TEST skip_rpc_with_delay 00:06:45.909 ************************************ 00:06:45.909 10:22:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:06:45.909 10:22:51 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:45.909 10:22:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:06:45.909 10:22:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:45.909 10:22:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:45.909 10:22:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:45.909 10:22:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:45.909 10:22:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:45.909 10:22:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:45.909 10:22:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:45.909 10:22:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:45.909 10:22:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:45.909 10:22:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:46.170 [2024-07-22 10:22:51.627513] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
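The error above is exactly what test_skip_rpc_with_delay looks for: --wait-for-rpc only makes sense when an RPC server will be started, so combining it with --no-rpc-server has to be rejected. A minimal sketch of the same negative check, assuming an SPDK checkout:

  #!/usr/bin/env bash
  # Sketch only: this flag combination must make spdk_tgt exit with an error.
  if ! build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo "flag combination rejected, as expected"
  fi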
00:06:46.170 [2024-07-22 10:22:51.627606] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:46.170 10:22:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:06:46.170 10:22:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:46.170 10:22:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:46.170 10:22:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:46.170 00:06:46.170 real 0m0.085s 00:06:46.170 user 0m0.054s 00:06:46.170 sys 0m0.030s 00:06:46.170 10:22:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.170 10:22:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:46.170 ************************************ 00:06:46.170 END TEST skip_rpc_with_delay 00:06:46.170 ************************************ 00:06:46.170 10:22:51 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:46.170 10:22:51 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:46.170 10:22:51 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:46.170 10:22:51 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:46.170 10:22:51 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:46.170 10:22:51 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.170 10:22:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.170 ************************************ 00:06:46.170 START TEST exit_on_failed_rpc_init 00:06:46.170 ************************************ 00:06:46.170 10:22:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:06:46.170 10:22:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1732828 00:06:46.170 10:22:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1732828 00:06:46.170 10:22:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:46.170 10:22:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 1732828 ']' 00:06:46.170 10:22:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.170 10:22:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:46.170 10:22:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.170 10:22:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:46.170 10:22:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:46.170 [2024-07-22 10:22:51.781624] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
00:06:46.170 [2024-07-22 10:22:51.781686] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1732828 ] 00:06:46.170 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.170 [2024-07-22 10:22:51.851731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.430 [2024-07-22 10:22:51.891664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.999 10:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:46.999 10:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:06:46.999 10:22:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:46.999 10:22:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:46.999 10:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:06:46.999 10:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:46.999 10:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:46.999 10:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.999 10:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:46.999 10:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.999 10:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:46.999 10:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.999 10:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:46.999 10:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:46.999 10:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:46.999 [2024-07-22 10:22:52.605794] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
00:06:46.999 [2024-07-22 10:22:52.605847] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1733096 ] 00:06:46.999 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.999 [2024-07-22 10:22:52.686675] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.259 [2024-07-22 10:22:52.717547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.259 [2024-07-22 10:22:52.717609] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:47.259 [2024-07-22 10:22:52.717618] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:47.259 [2024-07-22 10:22:52.717624] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:47.259 10:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:06:47.259 10:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:47.259 10:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:06:47.259 10:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:06:47.259 10:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:06:47.259 10:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:47.259 10:22:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:47.259 10:22:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1732828 00:06:47.259 10:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 1732828 ']' 00:06:47.259 10:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 1732828 00:06:47.259 10:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:06:47.259 10:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:47.259 10:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1732828 00:06:47.259 10:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:47.259 10:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:47.259 10:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1732828' 00:06:47.259 killing process with pid 1732828 00:06:47.259 10:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 1732828 00:06:47.259 10:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 1732828 00:06:47.568 00:06:47.568 real 0m1.284s 00:06:47.568 user 0m1.460s 00:06:47.568 sys 0m0.384s 00:06:47.568 10:22:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.568 10:22:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:47.568 ************************************ 00:06:47.568 END TEST exit_on_failed_rpc_init 00:06:47.568 ************************************ 00:06:47.568 10:22:53 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:47.568 10:22:53 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:47.568 00:06:47.568 real 0m13.547s 00:06:47.568 user 0m13.095s 00:06:47.568 sys 0m1.456s 00:06:47.568 10:22:53 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.568 10:22:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.568 ************************************ 00:06:47.568 END TEST skip_rpc 00:06:47.568 ************************************ 00:06:47.568 10:22:53 -- common/autotest_common.sh@1142 -- # return 0 00:06:47.568 10:22:53 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:47.568 10:22:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:47.568 10:22:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.568 10:22:53 -- common/autotest_common.sh@10 -- # set +x 00:06:47.568 ************************************ 00:06:47.568 START TEST rpc_client 00:06:47.568 ************************************ 00:06:47.568 10:22:53 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:47.568 * Looking for test storage... 00:06:47.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:47.829 10:22:53 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:47.829 OK 00:06:47.829 10:22:53 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:47.829 00:06:47.829 real 0m0.122s 00:06:47.829 user 0m0.054s 00:06:47.829 sys 0m0.074s 00:06:47.829 10:22:53 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.829 10:22:53 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:47.829 ************************************ 00:06:47.829 END TEST rpc_client 00:06:47.829 ************************************ 00:06:47.829 10:22:53 -- common/autotest_common.sh@1142 -- # return 0 00:06:47.829 10:22:53 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:47.829 10:22:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:47.829 10:22:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.829 10:22:53 -- common/autotest_common.sh@10 -- # set +x 00:06:47.829 ************************************ 00:06:47.829 START TEST json_config 00:06:47.829 ************************************ 00:06:47.829 10:22:53 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:47.829 10:22:53 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:47.829 10:22:53 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:47.829 10:22:53 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:47.829 10:22:53 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:47.829 10:22:53 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:47.829 10:22:53 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:47.829 10:22:53 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:47.829 10:22:53 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:47.829 10:22:53 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:47.829 
10:22:53 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:47.829 10:22:53 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:47.829 10:22:53 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:47.829 10:22:53 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:47.829 10:22:53 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:47.829 10:22:53 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:47.829 10:22:53 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:47.829 10:22:53 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:47.829 10:22:53 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:47.829 10:22:53 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:47.829 10:22:53 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:47.829 10:22:53 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:47.829 10:22:53 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:47.829 10:22:53 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.829 10:22:53 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.829 10:22:53 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.829 10:22:53 json_config -- paths/export.sh@5 -- # export PATH 00:06:47.829 10:22:53 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.829 10:22:53 json_config -- nvmf/common.sh@47 -- # : 0 00:06:47.829 10:22:53 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:47.829 10:22:53 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:47.829 10:22:53 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:47.829 10:22:53 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:47.829 10:22:53 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:47.829 10:22:53 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:47.829 10:22:53 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:47.829 10:22:53 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:47.829 10:22:53 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:47.829 10:22:53 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:47.829 10:22:53 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:47.829 10:22:53 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:47.829 10:22:53 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:47.829 10:22:53 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:47.829 10:22:53 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:47.829 10:22:53 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:47.829 10:22:53 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:47.829 10:22:53 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:47.829 10:22:53 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:47.830 10:22:53 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:47.830 10:22:53 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:47.830 10:22:53 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:47.830 10:22:53 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:47.830 10:22:53 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:06:47.830 INFO: JSON configuration test init 00:06:47.830 10:22:53 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:06:47.830 10:22:53 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:06:47.830 10:22:53 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:47.830 10:22:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:47.830 10:22:53 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:06:47.830 10:22:53 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:47.830 10:22:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:47.830 10:22:53 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:06:47.830 10:22:53 json_config -- json_config/common.sh@9 -- # local app=target 00:06:47.830 10:22:53 json_config -- json_config/common.sh@10 -- # shift 00:06:47.830 10:22:53 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:47.830 10:22:53 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:47.830 10:22:53 
json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:47.830 10:22:53 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:47.830 10:22:53 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:47.830 10:22:53 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1733280 00:06:47.830 10:22:53 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:47.830 Waiting for target to run... 00:06:47.830 10:22:53 json_config -- json_config/common.sh@25 -- # waitforlisten 1733280 /var/tmp/spdk_tgt.sock 00:06:47.830 10:22:53 json_config -- common/autotest_common.sh@829 -- # '[' -z 1733280 ']' 00:06:47.830 10:22:53 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:47.830 10:22:53 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:47.830 10:22:53 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:47.830 10:22:53 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:47.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:47.830 10:22:53 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:47.830 10:22:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:47.830 [2024-07-22 10:22:53.486133] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:06:47.830 [2024-07-22 10:22:53.486206] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1733280 ] 00:06:47.830 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.089 [2024-07-22 10:22:53.759683] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.089 [2024-07-22 10:22:53.778101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.659 10:22:54 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:48.659 10:22:54 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:48.659 10:22:54 json_config -- json_config/common.sh@26 -- # echo '' 00:06:48.659 00:06:48.659 10:22:54 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:06:48.659 10:22:54 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:06:48.659 10:22:54 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:48.659 10:22:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:48.659 10:22:54 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:06:48.659 10:22:54 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:06:48.659 10:22:54 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:48.659 10:22:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:48.659 10:22:54 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:48.659 10:22:54 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:06:48.659 10:22:54 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:49.229 10:22:54 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:06:49.229 10:22:54 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:49.229 10:22:54 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:49.229 10:22:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:49.229 10:22:54 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:49.229 10:22:54 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:49.229 10:22:54 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:49.229 10:22:54 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:49.229 10:22:54 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:49.229 10:22:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:49.489 10:22:54 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:49.489 10:22:54 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:49.489 10:22:54 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:06:49.489 10:22:54 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:06:49.489 10:22:54 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:06:49.489 10:22:54 json_config -- json_config/json_config.sh@51 -- # sort 00:06:49.489 10:22:54 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:06:49.489 10:22:55 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:06:49.489 10:22:55 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:06:49.489 10:22:55 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:06:49.489 10:22:55 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:49.489 10:22:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:49.489 10:22:55 json_config -- json_config/json_config.sh@59 -- # return 0 00:06:49.489 10:22:55 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:49.489 10:22:55 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:49.489 10:22:55 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:06:49.489 10:22:55 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:06:49.489 10:22:55 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:06:49.489 10:22:55 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:06:49.489 10:22:55 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:49.489 10:22:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:49.489 10:22:55 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:49.489 10:22:55 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:06:49.489 10:22:55 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:06:49.489 10:22:55 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:49.489 10:22:55 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:49.749 MallocForNvmf0 00:06:49.749 10:22:55 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:49.749 10:22:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:49.749 MallocForNvmf1 00:06:49.749 10:22:55 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:49.749 10:22:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:50.009 [2024-07-22 10:22:55.495381] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:50.009 10:22:55 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:50.009 10:22:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:50.009 10:22:55 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:50.009 10:22:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:50.268 10:22:55 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:50.268 10:22:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:50.268 10:22:55 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:50.528 10:22:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:50.528 [2024-07-22 10:22:56.105367] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:50.528 10:22:56 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:06:50.528 10:22:56 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:50.528 10:22:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:50.528 10:22:56 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:06:50.528 10:22:56 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:50.528 10:22:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:50.528 10:22:56 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:06:50.528 10:22:56 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:50.528 10:22:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:50.788 MallocBdevForConfigChangeCheck 00:06:50.788 10:22:56 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:06:50.788 10:22:56 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:50.788 10:22:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:50.788 10:22:56 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:06:50.788 10:22:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:51.048 10:22:56 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:06:51.048 INFO: shutting down applications... 00:06:51.048 10:22:56 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:06:51.048 10:22:56 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:06:51.048 10:22:56 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:06:51.048 10:22:56 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:51.618 Calling clear_iscsi_subsystem 00:06:51.618 Calling clear_nvmf_subsystem 00:06:51.618 Calling clear_nbd_subsystem 00:06:51.618 Calling clear_ublk_subsystem 00:06:51.618 Calling clear_vhost_blk_subsystem 00:06:51.618 Calling clear_vhost_scsi_subsystem 00:06:51.618 Calling clear_bdev_subsystem 00:06:51.618 10:22:57 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:51.618 10:22:57 json_config -- json_config/json_config.sh@347 -- # count=100 00:06:51.618 10:22:57 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:06:51.618 10:22:57 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:51.618 10:22:57 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:51.618 10:22:57 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:51.878 10:22:57 json_config -- json_config/json_config.sh@349 -- # break 00:06:51.878 10:22:57 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:06:51.878 10:22:57 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:06:51.878 10:22:57 json_config -- json_config/common.sh@31 -- # local app=target 00:06:51.878 10:22:57 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:51.878 10:22:57 json_config -- json_config/common.sh@35 -- # [[ -n 1733280 ]] 00:06:51.878 10:22:57 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1733280 00:06:51.878 10:22:57 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:51.878 10:22:57 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:51.878 10:22:57 json_config -- json_config/common.sh@41 -- # kill -0 1733280 00:06:51.878 10:22:57 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:52.447 10:22:57 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:52.447 10:22:57 json_config -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:06:52.447 10:22:57 json_config -- json_config/common.sh@41 -- # kill -0 1733280 00:06:52.447 10:22:57 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:52.447 10:22:57 json_config -- json_config/common.sh@43 -- # break 00:06:52.447 10:22:57 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:52.447 10:22:57 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:52.447 SPDK target shutdown done 00:06:52.447 10:22:57 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:06:52.447 INFO: relaunching applications... 00:06:52.447 10:22:57 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:52.447 10:22:57 json_config -- json_config/common.sh@9 -- # local app=target 00:06:52.447 10:22:57 json_config -- json_config/common.sh@10 -- # shift 00:06:52.447 10:22:57 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:52.447 10:22:57 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:52.447 10:22:57 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:52.447 10:22:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:52.447 10:22:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:52.447 10:22:57 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1734347 00:06:52.447 10:22:57 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:52.447 Waiting for target to run... 00:06:52.447 10:22:57 json_config -- json_config/common.sh@25 -- # waitforlisten 1734347 /var/tmp/spdk_tgt.sock 00:06:52.447 10:22:57 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:52.447 10:22:57 json_config -- common/autotest_common.sh@829 -- # '[' -z 1734347 ']' 00:06:52.447 10:22:57 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:52.447 10:22:57 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:52.447 10:22:57 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:52.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:52.447 10:22:57 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:52.447 10:22:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:52.447 [2024-07-22 10:22:57.970285] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
00:06:52.447 [2024-07-22 10:22:57.970346] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1734347 ] 00:06:52.447 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.707 [2024-07-22 10:22:58.348822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.707 [2024-07-22 10:22:58.367819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.275 [2024-07-22 10:22:58.841356] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:53.275 [2024-07-22 10:22:58.873703] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:53.275 10:22:58 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:53.275 10:22:58 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:53.275 10:22:58 json_config -- json_config/common.sh@26 -- # echo '' 00:06:53.275 00:06:53.275 10:22:58 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:06:53.275 10:22:58 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:53.275 INFO: Checking if target configuration is the same... 00:06:53.275 10:22:58 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:53.275 10:22:58 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:06:53.275 10:22:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:53.275 + '[' 2 -ne 2 ']' 00:06:53.275 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:53.275 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:53.275 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:53.275 +++ basename /dev/fd/62 00:06:53.275 ++ mktemp /tmp/62.XXX 00:06:53.275 + tmp_file_1=/tmp/62.VOU 00:06:53.275 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:53.275 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:53.275 + tmp_file_2=/tmp/spdk_tgt_config.json.nCp 00:06:53.275 + ret=0 00:06:53.275 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:53.534 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:53.794 + diff -u /tmp/62.VOU /tmp/spdk_tgt_config.json.nCp 00:06:53.794 + echo 'INFO: JSON config files are the same' 00:06:53.794 INFO: JSON config files are the same 00:06:53.794 + rm /tmp/62.VOU /tmp/spdk_tgt_config.json.nCp 00:06:53.794 + exit 0 00:06:53.794 10:22:59 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:06:53.794 10:22:59 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:53.794 INFO: changing configuration and checking if this can be detected... 
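The same-configuration check above boils down to normalizing two JSON dumps and diffing them: the live configuration pulled over the target's RPC socket versus the spdk_tgt_config.json the target was relaunched with. A minimal sketch of that idea, built from the commands visible in the trace (paths are shortened, the /tmp file names are placeholders, and the stdin/stdout redirection around config_filter.py is assumed here rather than copied from json_diff.sh):

    # pull the running configuration from the target's RPC socket
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/live.json
    # sort both sides so ordering differences do not register as changes (assumed filter usage)
    test/json_config/config_filter.py -method sort < /tmp/live.json > /tmp/live.sorted.json
    test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/disk.sorted.json
    # identical output means the saved and running configurations match
    diff -u /tmp/live.sorted.json /tmp/disk.sorted.json && echo 'INFO: JSON config files are the same'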
00:06:53.794 10:22:59 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:53.794 10:22:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:54.055 10:22:59 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:54.055 10:22:59 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:06:54.055 10:22:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:54.055 + '[' 2 -ne 2 ']' 00:06:54.055 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:54.055 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:54.055 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:54.055 +++ basename /dev/fd/62 00:06:54.055 ++ mktemp /tmp/62.XXX 00:06:54.055 + tmp_file_1=/tmp/62.IWP 00:06:54.055 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:54.055 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:54.055 + tmp_file_2=/tmp/spdk_tgt_config.json.ffG 00:06:54.055 + ret=0 00:06:54.055 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:54.316 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:54.316 + diff -u /tmp/62.IWP /tmp/spdk_tgt_config.json.ffG 00:06:54.316 + ret=1 00:06:54.316 + echo '=== Start of file: /tmp/62.IWP ===' 00:06:54.316 + cat /tmp/62.IWP 00:06:54.316 + echo '=== End of file: /tmp/62.IWP ===' 00:06:54.316 + echo '' 00:06:54.316 + echo '=== Start of file: /tmp/spdk_tgt_config.json.ffG ===' 00:06:54.316 + cat /tmp/spdk_tgt_config.json.ffG 00:06:54.316 + echo '=== End of file: /tmp/spdk_tgt_config.json.ffG ===' 00:06:54.316 + echo '' 00:06:54.316 + rm /tmp/62.IWP /tmp/spdk_tgt_config.json.ffG 00:06:54.316 + exit 1 00:06:54.316 10:22:59 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:06:54.316 INFO: configuration change detected. 
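For context, the configuration being saved, diffed and then mutated here is the NVMe-oF/TCP target that the first half of the test built over /var/tmp/spdk_tgt.sock. Condensed from the trace (paths shortened; same socket and arguments), the setup amounts to:

    # two malloc bdevs to use as namespaces
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
    # TCP transport, one subsystem, both namespaces, listener on 127.0.0.1:4420
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

The change that gets detected is simply deleting the extra MallocBdevForConfigChangeCheck bdev created for this purpose and re-running the same diff, which then returns non-zero.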
00:06:54.316 10:22:59 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:06:54.316 10:22:59 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:06:54.316 10:22:59 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:54.316 10:22:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:54.316 10:22:59 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:06:54.316 10:22:59 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:06:54.316 10:22:59 json_config -- json_config/json_config.sh@321 -- # [[ -n 1734347 ]] 00:06:54.316 10:22:59 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:06:54.316 10:22:59 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:06:54.316 10:22:59 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:54.316 10:22:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:54.316 10:22:59 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:06:54.316 10:22:59 json_config -- json_config/json_config.sh@197 -- # uname -s 00:06:54.316 10:22:59 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:06:54.316 10:22:59 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:06:54.316 10:22:59 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:06:54.316 10:22:59 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:06:54.316 10:22:59 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:54.316 10:22:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:54.316 10:22:59 json_config -- json_config/json_config.sh@327 -- # killprocess 1734347 00:06:54.316 10:22:59 json_config -- common/autotest_common.sh@948 -- # '[' -z 1734347 ']' 00:06:54.316 10:22:59 json_config -- common/autotest_common.sh@952 -- # kill -0 1734347 00:06:54.316 10:22:59 json_config -- common/autotest_common.sh@953 -- # uname 00:06:54.316 10:22:59 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:54.316 10:22:59 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1734347 00:06:54.316 10:22:59 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:54.316 10:22:59 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:54.316 10:22:59 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1734347' 00:06:54.316 killing process with pid 1734347 00:06:54.316 10:22:59 json_config -- common/autotest_common.sh@967 -- # kill 1734347 00:06:54.316 10:22:59 json_config -- common/autotest_common.sh@972 -- # wait 1734347 00:06:54.576 10:23:00 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:54.576 10:23:00 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:06:54.576 10:23:00 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:54.576 10:23:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:54.838 10:23:00 json_config -- json_config/json_config.sh@332 -- # return 0 00:06:54.838 10:23:00 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:06:54.838 INFO: Success 00:06:54.838 00:06:54.838 real 0m6.977s 
00:06:54.838 user 0m8.336s 00:06:54.838 sys 0m1.843s 00:06:54.838 10:23:00 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.838 10:23:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:54.838 ************************************ 00:06:54.838 END TEST json_config 00:06:54.838 ************************************ 00:06:54.838 10:23:00 -- common/autotest_common.sh@1142 -- # return 0 00:06:54.838 10:23:00 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:54.838 10:23:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:54.838 10:23:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.838 10:23:00 -- common/autotest_common.sh@10 -- # set +x 00:06:54.838 ************************************ 00:06:54.838 START TEST json_config_extra_key 00:06:54.838 ************************************ 00:06:54.838 10:23:00 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:54.838 10:23:00 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:54.838 10:23:00 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:54.838 10:23:00 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:54.838 10:23:00 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:54.838 10:23:00 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:54.838 10:23:00 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:54.838 10:23:00 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:54.838 10:23:00 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:54.838 10:23:00 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:54.838 10:23:00 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:54.838 10:23:00 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:54.838 10:23:00 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:54.838 10:23:00 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:54.838 10:23:00 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:54.838 10:23:00 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:54.838 10:23:00 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:54.838 10:23:00 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:54.838 10:23:00 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:54.838 10:23:00 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:54.838 10:23:00 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:54.838 10:23:00 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:54.838 10:23:00 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:54.838 10:23:00 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.838 10:23:00 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.838 10:23:00 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.838 10:23:00 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:54.838 10:23:00 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.838 10:23:00 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:54.838 10:23:00 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:54.838 10:23:00 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:54.838 10:23:00 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:54.838 10:23:00 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:54.838 10:23:00 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:54.838 10:23:00 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:54.838 10:23:00 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:54.838 10:23:00 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:54.838 10:23:00 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:54.838 10:23:00 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:54.838 10:23:00 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:54.838 10:23:00 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:54.838 10:23:00 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:54.838 10:23:00 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:54.838 10:23:00 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:54.838 10:23:00 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:54.838 10:23:00 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:54.838 10:23:00 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:54.838 10:23:00 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:54.838 INFO: launching applications... 00:06:54.838 10:23:00 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:54.838 10:23:00 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:54.838 10:23:00 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:54.838 10:23:00 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:54.838 10:23:00 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:54.838 10:23:00 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:54.838 10:23:00 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:54.838 10:23:00 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:54.838 10:23:00 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1735028 00:06:54.838 10:23:00 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:54.838 Waiting for target to run... 00:06:54.838 10:23:00 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1735028 /var/tmp/spdk_tgt.sock 00:06:54.839 10:23:00 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 1735028 ']' 00:06:54.839 10:23:00 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:54.839 10:23:00 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:54.839 10:23:00 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:54.839 10:23:00 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:54.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:54.839 10:23:00 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:54.839 10:23:00 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:54.839 [2024-07-22 10:23:00.532774] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
00:06:54.839 [2024-07-22 10:23:00.532849] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1735028 ] 00:06:55.099 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.358 [2024-07-22 10:23:00.950322] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.359 [2024-07-22 10:23:00.977113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.619 10:23:01 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:55.619 10:23:01 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:06:55.619 10:23:01 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:55.619 00:06:55.619 10:23:01 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:55.619 INFO: shutting down applications... 00:06:55.619 10:23:01 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:55.619 10:23:01 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:55.619 10:23:01 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:55.619 10:23:01 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1735028 ]] 00:06:55.619 10:23:01 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1735028 00:06:55.619 10:23:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:55.619 10:23:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:55.619 10:23:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1735028 00:06:55.619 10:23:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:56.190 10:23:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:56.190 10:23:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:56.190 10:23:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1735028 00:06:56.190 10:23:01 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:56.190 10:23:01 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:56.190 10:23:01 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:56.190 10:23:01 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:56.190 SPDK target shutdown done 00:06:56.190 10:23:01 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:56.190 Success 00:06:56.190 00:06:56.190 real 0m1.452s 00:06:56.190 user 0m0.953s 00:06:56.190 sys 0m0.525s 00:06:56.190 10:23:01 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.190 10:23:01 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:56.190 ************************************ 00:06:56.190 END TEST json_config_extra_key 00:06:56.190 ************************************ 00:06:56.190 10:23:01 -- common/autotest_common.sh@1142 -- # return 0 00:06:56.190 10:23:01 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:56.190 10:23:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:56.190 10:23:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.190 10:23:01 -- 
common/autotest_common.sh@10 -- # set +x 00:06:56.190 ************************************ 00:06:56.190 START TEST alias_rpc 00:06:56.190 ************************************ 00:06:56.190 10:23:01 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:56.450 * Looking for test storage... 00:06:56.450 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:56.450 10:23:01 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:56.450 10:23:01 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1735325 00:06:56.450 10:23:01 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1735325 00:06:56.450 10:23:01 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 1735325 ']' 00:06:56.450 10:23:01 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.450 10:23:01 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:56.451 10:23:01 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.451 10:23:01 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:56.451 10:23:01 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.451 10:23:01 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:56.451 [2024-07-22 10:23:02.025710] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:06:56.451 [2024-07-22 10:23:02.025768] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1735325 ] 00:06:56.451 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.451 [2024-07-22 10:23:02.093637] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.451 [2024-07-22 10:23:02.129197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.453 10:23:02 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:57.453 10:23:02 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:57.453 10:23:02 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:57.453 10:23:02 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1735325 00:06:57.453 10:23:02 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 1735325 ']' 00:06:57.453 10:23:02 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 1735325 00:06:57.453 10:23:02 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:06:57.453 10:23:02 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:57.453 10:23:02 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1735325 00:06:57.453 10:23:03 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:57.453 10:23:03 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:57.453 10:23:03 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1735325' 00:06:57.453 killing process with pid 1735325 00:06:57.453 10:23:03 alias_rpc -- 
common/autotest_common.sh@967 -- # kill 1735325 00:06:57.453 10:23:03 alias_rpc -- common/autotest_common.sh@972 -- # wait 1735325 00:06:57.713 00:06:57.713 real 0m1.322s 00:06:57.713 user 0m1.430s 00:06:57.713 sys 0m0.371s 00:06:57.713 10:23:03 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.713 10:23:03 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.713 ************************************ 00:06:57.714 END TEST alias_rpc 00:06:57.714 ************************************ 00:06:57.714 10:23:03 -- common/autotest_common.sh@1142 -- # return 0 00:06:57.714 10:23:03 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:57.714 10:23:03 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:57.714 10:23:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:57.714 10:23:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.714 10:23:03 -- common/autotest_common.sh@10 -- # set +x 00:06:57.714 ************************************ 00:06:57.714 START TEST spdkcli_tcp 00:06:57.714 ************************************ 00:06:57.714 10:23:03 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:57.714 * Looking for test storage... 00:06:57.714 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:57.714 10:23:03 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:57.714 10:23:03 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:57.714 10:23:03 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:57.714 10:23:03 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:57.714 10:23:03 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:57.714 10:23:03 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:57.714 10:23:03 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:57.714 10:23:03 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:57.714 10:23:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:57.714 10:23:03 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1735590 00:06:57.714 10:23:03 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1735590 00:06:57.714 10:23:03 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:57.714 10:23:03 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 1735590 ']' 00:06:57.714 10:23:03 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.714 10:23:03 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:57.714 10:23:03 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.714 10:23:03 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:57.714 10:23:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:57.975 [2024-07-22 10:23:03.440692] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
00:06:57.975 [2024-07-22 10:23:03.440750] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1735590 ] 00:06:57.975 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.976 [2024-07-22 10:23:03.505588] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:57.976 [2024-07-22 10:23:03.538326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.976 [2024-07-22 10:23:03.538329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.547 10:23:04 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:58.547 10:23:04 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:06:58.547 10:23:04 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1735909 00:06:58.548 10:23:04 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:58.548 10:23:04 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:58.809 [ 00:06:58.809 "bdev_malloc_delete", 00:06:58.809 "bdev_malloc_create", 00:06:58.809 "bdev_null_resize", 00:06:58.809 "bdev_null_delete", 00:06:58.809 "bdev_null_create", 00:06:58.809 "bdev_nvme_cuse_unregister", 00:06:58.809 "bdev_nvme_cuse_register", 00:06:58.809 "bdev_opal_new_user", 00:06:58.809 "bdev_opal_set_lock_state", 00:06:58.809 "bdev_opal_delete", 00:06:58.809 "bdev_opal_get_info", 00:06:58.809 "bdev_opal_create", 00:06:58.810 "bdev_nvme_opal_revert", 00:06:58.810 "bdev_nvme_opal_init", 00:06:58.810 "bdev_nvme_send_cmd", 00:06:58.810 "bdev_nvme_get_path_iostat", 00:06:58.810 "bdev_nvme_get_mdns_discovery_info", 00:06:58.810 "bdev_nvme_stop_mdns_discovery", 00:06:58.810 "bdev_nvme_start_mdns_discovery", 00:06:58.810 "bdev_nvme_set_multipath_policy", 00:06:58.810 "bdev_nvme_set_preferred_path", 00:06:58.810 "bdev_nvme_get_io_paths", 00:06:58.810 "bdev_nvme_remove_error_injection", 00:06:58.810 "bdev_nvme_add_error_injection", 00:06:58.810 "bdev_nvme_get_discovery_info", 00:06:58.810 "bdev_nvme_stop_discovery", 00:06:58.810 "bdev_nvme_start_discovery", 00:06:58.810 "bdev_nvme_get_controller_health_info", 00:06:58.810 "bdev_nvme_disable_controller", 00:06:58.810 "bdev_nvme_enable_controller", 00:06:58.810 "bdev_nvme_reset_controller", 00:06:58.810 "bdev_nvme_get_transport_statistics", 00:06:58.810 "bdev_nvme_apply_firmware", 00:06:58.810 "bdev_nvme_detach_controller", 00:06:58.810 "bdev_nvme_get_controllers", 00:06:58.810 "bdev_nvme_attach_controller", 00:06:58.810 "bdev_nvme_set_hotplug", 00:06:58.810 "bdev_nvme_set_options", 00:06:58.810 "bdev_passthru_delete", 00:06:58.810 "bdev_passthru_create", 00:06:58.810 "bdev_lvol_set_parent_bdev", 00:06:58.810 "bdev_lvol_set_parent", 00:06:58.810 "bdev_lvol_check_shallow_copy", 00:06:58.810 "bdev_lvol_start_shallow_copy", 00:06:58.810 "bdev_lvol_grow_lvstore", 00:06:58.810 "bdev_lvol_get_lvols", 00:06:58.810 "bdev_lvol_get_lvstores", 00:06:58.810 "bdev_lvol_delete", 00:06:58.810 "bdev_lvol_set_read_only", 00:06:58.810 "bdev_lvol_resize", 00:06:58.810 "bdev_lvol_decouple_parent", 00:06:58.810 "bdev_lvol_inflate", 00:06:58.810 "bdev_lvol_rename", 00:06:58.810 "bdev_lvol_clone_bdev", 00:06:58.810 "bdev_lvol_clone", 00:06:58.810 "bdev_lvol_snapshot", 00:06:58.810 "bdev_lvol_create", 00:06:58.810 "bdev_lvol_delete_lvstore", 00:06:58.810 
"bdev_lvol_rename_lvstore", 00:06:58.810 "bdev_lvol_create_lvstore", 00:06:58.810 "bdev_raid_set_options", 00:06:58.810 "bdev_raid_remove_base_bdev", 00:06:58.810 "bdev_raid_add_base_bdev", 00:06:58.810 "bdev_raid_delete", 00:06:58.810 "bdev_raid_create", 00:06:58.810 "bdev_raid_get_bdevs", 00:06:58.810 "bdev_error_inject_error", 00:06:58.810 "bdev_error_delete", 00:06:58.810 "bdev_error_create", 00:06:58.810 "bdev_split_delete", 00:06:58.810 "bdev_split_create", 00:06:58.810 "bdev_delay_delete", 00:06:58.810 "bdev_delay_create", 00:06:58.810 "bdev_delay_update_latency", 00:06:58.810 "bdev_zone_block_delete", 00:06:58.810 "bdev_zone_block_create", 00:06:58.810 "blobfs_create", 00:06:58.810 "blobfs_detect", 00:06:58.810 "blobfs_set_cache_size", 00:06:58.810 "bdev_aio_delete", 00:06:58.810 "bdev_aio_rescan", 00:06:58.810 "bdev_aio_create", 00:06:58.810 "bdev_ftl_set_property", 00:06:58.810 "bdev_ftl_get_properties", 00:06:58.810 "bdev_ftl_get_stats", 00:06:58.810 "bdev_ftl_unmap", 00:06:58.810 "bdev_ftl_unload", 00:06:58.810 "bdev_ftl_delete", 00:06:58.810 "bdev_ftl_load", 00:06:58.810 "bdev_ftl_create", 00:06:58.810 "bdev_virtio_attach_controller", 00:06:58.810 "bdev_virtio_scsi_get_devices", 00:06:58.810 "bdev_virtio_detach_controller", 00:06:58.810 "bdev_virtio_blk_set_hotplug", 00:06:58.810 "bdev_iscsi_delete", 00:06:58.810 "bdev_iscsi_create", 00:06:58.810 "bdev_iscsi_set_options", 00:06:58.810 "accel_error_inject_error", 00:06:58.810 "ioat_scan_accel_module", 00:06:58.810 "dsa_scan_accel_module", 00:06:58.810 "iaa_scan_accel_module", 00:06:58.810 "vfu_virtio_create_scsi_endpoint", 00:06:58.810 "vfu_virtio_scsi_remove_target", 00:06:58.810 "vfu_virtio_scsi_add_target", 00:06:58.810 "vfu_virtio_create_blk_endpoint", 00:06:58.810 "vfu_virtio_delete_endpoint", 00:06:58.810 "keyring_file_remove_key", 00:06:58.810 "keyring_file_add_key", 00:06:58.810 "keyring_linux_set_options", 00:06:58.810 "iscsi_get_histogram", 00:06:58.810 "iscsi_enable_histogram", 00:06:58.810 "iscsi_set_options", 00:06:58.810 "iscsi_get_auth_groups", 00:06:58.810 "iscsi_auth_group_remove_secret", 00:06:58.810 "iscsi_auth_group_add_secret", 00:06:58.810 "iscsi_delete_auth_group", 00:06:58.810 "iscsi_create_auth_group", 00:06:58.810 "iscsi_set_discovery_auth", 00:06:58.810 "iscsi_get_options", 00:06:58.810 "iscsi_target_node_request_logout", 00:06:58.810 "iscsi_target_node_set_redirect", 00:06:58.810 "iscsi_target_node_set_auth", 00:06:58.810 "iscsi_target_node_add_lun", 00:06:58.810 "iscsi_get_stats", 00:06:58.810 "iscsi_get_connections", 00:06:58.810 "iscsi_portal_group_set_auth", 00:06:58.810 "iscsi_start_portal_group", 00:06:58.810 "iscsi_delete_portal_group", 00:06:58.810 "iscsi_create_portal_group", 00:06:58.810 "iscsi_get_portal_groups", 00:06:58.810 "iscsi_delete_target_node", 00:06:58.810 "iscsi_target_node_remove_pg_ig_maps", 00:06:58.810 "iscsi_target_node_add_pg_ig_maps", 00:06:58.810 "iscsi_create_target_node", 00:06:58.810 "iscsi_get_target_nodes", 00:06:58.810 "iscsi_delete_initiator_group", 00:06:58.810 "iscsi_initiator_group_remove_initiators", 00:06:58.810 "iscsi_initiator_group_add_initiators", 00:06:58.810 "iscsi_create_initiator_group", 00:06:58.810 "iscsi_get_initiator_groups", 00:06:58.810 "nvmf_set_crdt", 00:06:58.810 "nvmf_set_config", 00:06:58.810 "nvmf_set_max_subsystems", 00:06:58.810 "nvmf_stop_mdns_prr", 00:06:58.810 "nvmf_publish_mdns_prr", 00:06:58.810 "nvmf_subsystem_get_listeners", 00:06:58.810 "nvmf_subsystem_get_qpairs", 00:06:58.810 "nvmf_subsystem_get_controllers", 00:06:58.810 
"nvmf_get_stats", 00:06:58.810 "nvmf_get_transports", 00:06:58.810 "nvmf_create_transport", 00:06:58.810 "nvmf_get_targets", 00:06:58.810 "nvmf_delete_target", 00:06:58.810 "nvmf_create_target", 00:06:58.810 "nvmf_subsystem_allow_any_host", 00:06:58.810 "nvmf_subsystem_remove_host", 00:06:58.810 "nvmf_subsystem_add_host", 00:06:58.810 "nvmf_ns_remove_host", 00:06:58.810 "nvmf_ns_add_host", 00:06:58.810 "nvmf_subsystem_remove_ns", 00:06:58.810 "nvmf_subsystem_add_ns", 00:06:58.810 "nvmf_subsystem_listener_set_ana_state", 00:06:58.810 "nvmf_discovery_get_referrals", 00:06:58.810 "nvmf_discovery_remove_referral", 00:06:58.810 "nvmf_discovery_add_referral", 00:06:58.810 "nvmf_subsystem_remove_listener", 00:06:58.810 "nvmf_subsystem_add_listener", 00:06:58.810 "nvmf_delete_subsystem", 00:06:58.810 "nvmf_create_subsystem", 00:06:58.810 "nvmf_get_subsystems", 00:06:58.810 "env_dpdk_get_mem_stats", 00:06:58.810 "nbd_get_disks", 00:06:58.810 "nbd_stop_disk", 00:06:58.810 "nbd_start_disk", 00:06:58.810 "ublk_recover_disk", 00:06:58.810 "ublk_get_disks", 00:06:58.810 "ublk_stop_disk", 00:06:58.810 "ublk_start_disk", 00:06:58.810 "ublk_destroy_target", 00:06:58.810 "ublk_create_target", 00:06:58.810 "virtio_blk_create_transport", 00:06:58.810 "virtio_blk_get_transports", 00:06:58.810 "vhost_controller_set_coalescing", 00:06:58.810 "vhost_get_controllers", 00:06:58.810 "vhost_delete_controller", 00:06:58.810 "vhost_create_blk_controller", 00:06:58.810 "vhost_scsi_controller_remove_target", 00:06:58.810 "vhost_scsi_controller_add_target", 00:06:58.810 "vhost_start_scsi_controller", 00:06:58.810 "vhost_create_scsi_controller", 00:06:58.810 "thread_set_cpumask", 00:06:58.810 "framework_get_governor", 00:06:58.810 "framework_get_scheduler", 00:06:58.810 "framework_set_scheduler", 00:06:58.810 "framework_get_reactors", 00:06:58.810 "thread_get_io_channels", 00:06:58.810 "thread_get_pollers", 00:06:58.810 "thread_get_stats", 00:06:58.810 "framework_monitor_context_switch", 00:06:58.810 "spdk_kill_instance", 00:06:58.810 "log_enable_timestamps", 00:06:58.810 "log_get_flags", 00:06:58.810 "log_clear_flag", 00:06:58.810 "log_set_flag", 00:06:58.810 "log_get_level", 00:06:58.810 "log_set_level", 00:06:58.810 "log_get_print_level", 00:06:58.810 "log_set_print_level", 00:06:58.810 "framework_enable_cpumask_locks", 00:06:58.810 "framework_disable_cpumask_locks", 00:06:58.810 "framework_wait_init", 00:06:58.810 "framework_start_init", 00:06:58.810 "scsi_get_devices", 00:06:58.810 "bdev_get_histogram", 00:06:58.810 "bdev_enable_histogram", 00:06:58.810 "bdev_set_qos_limit", 00:06:58.810 "bdev_set_qd_sampling_period", 00:06:58.810 "bdev_get_bdevs", 00:06:58.810 "bdev_reset_iostat", 00:06:58.810 "bdev_get_iostat", 00:06:58.810 "bdev_examine", 00:06:58.810 "bdev_wait_for_examine", 00:06:58.810 "bdev_set_options", 00:06:58.810 "notify_get_notifications", 00:06:58.810 "notify_get_types", 00:06:58.810 "accel_get_stats", 00:06:58.810 "accel_set_options", 00:06:58.810 "accel_set_driver", 00:06:58.810 "accel_crypto_key_destroy", 00:06:58.810 "accel_crypto_keys_get", 00:06:58.810 "accel_crypto_key_create", 00:06:58.810 "accel_assign_opc", 00:06:58.810 "accel_get_module_info", 00:06:58.810 "accel_get_opc_assignments", 00:06:58.810 "vmd_rescan", 00:06:58.810 "vmd_remove_device", 00:06:58.810 "vmd_enable", 00:06:58.810 "sock_get_default_impl", 00:06:58.810 "sock_set_default_impl", 00:06:58.810 "sock_impl_set_options", 00:06:58.810 "sock_impl_get_options", 00:06:58.810 "iobuf_get_stats", 00:06:58.810 "iobuf_set_options", 
00:06:58.810 "keyring_get_keys", 00:06:58.810 "framework_get_pci_devices", 00:06:58.810 "framework_get_config", 00:06:58.810 "framework_get_subsystems", 00:06:58.810 "vfu_tgt_set_base_path", 00:06:58.810 "trace_get_info", 00:06:58.810 "trace_get_tpoint_group_mask", 00:06:58.810 "trace_disable_tpoint_group", 00:06:58.810 "trace_enable_tpoint_group", 00:06:58.810 "trace_clear_tpoint_mask", 00:06:58.810 "trace_set_tpoint_mask", 00:06:58.810 "spdk_get_version", 00:06:58.810 "rpc_get_methods" 00:06:58.810 ] 00:06:58.810 10:23:04 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:58.810 10:23:04 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:58.810 10:23:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:58.810 10:23:04 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:58.811 10:23:04 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1735590 00:06:58.811 10:23:04 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 1735590 ']' 00:06:58.811 10:23:04 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 1735590 00:06:58.811 10:23:04 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:06:58.811 10:23:04 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:58.811 10:23:04 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1735590 00:06:58.811 10:23:04 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:58.811 10:23:04 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:58.811 10:23:04 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1735590' 00:06:58.811 killing process with pid 1735590 00:06:58.811 10:23:04 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 1735590 00:06:58.811 10:23:04 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 1735590 00:06:59.072 00:06:59.072 real 0m1.347s 00:06:59.072 user 0m2.495s 00:06:59.072 sys 0m0.409s 00:06:59.072 10:23:04 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.072 10:23:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:59.072 ************************************ 00:06:59.072 END TEST spdkcli_tcp 00:06:59.072 ************************************ 00:06:59.072 10:23:04 -- common/autotest_common.sh@1142 -- # return 0 00:06:59.072 10:23:04 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:59.072 10:23:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:59.072 10:23:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.072 10:23:04 -- common/autotest_common.sh@10 -- # set +x 00:06:59.072 ************************************ 00:06:59.072 START TEST dpdk_mem_utility 00:06:59.072 ************************************ 00:06:59.072 10:23:04 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:59.333 * Looking for test storage... 
00:06:59.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:59.333 10:23:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:59.333 10:23:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1735984 00:06:59.333 10:23:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1735984 00:06:59.333 10:23:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:59.333 10:23:04 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 1735984 ']' 00:06:59.333 10:23:04 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.333 10:23:04 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:59.333 10:23:04 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.333 10:23:04 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:59.333 10:23:04 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:59.333 [2024-07-22 10:23:04.862004] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:06:59.333 [2024-07-22 10:23:04.862069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1735984 ] 00:06:59.333 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.333 [2024-07-22 10:23:04.931105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.333 [2024-07-22 10:23:04.969010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.276 10:23:05 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:00.276 10:23:05 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:07:00.276 10:23:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:00.276 10:23:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:00.276 10:23:05 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.276 10:23:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:00.276 { 00:07:00.276 "filename": "/tmp/spdk_mem_dump.txt" 00:07:00.276 } 00:07:00.276 10:23:05 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.276 10:23:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:00.276 DPDK memory size 814.000000 MiB in 1 heap(s) 00:07:00.276 1 heaps totaling size 814.000000 MiB 00:07:00.276 size: 814.000000 MiB heap id: 0 00:07:00.276 end heaps---------- 00:07:00.276 8 mempools totaling size 598.116089 MiB 00:07:00.276 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:00.276 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:00.276 size: 84.521057 MiB name: bdev_io_1735984 00:07:00.276 size: 51.011292 MiB name: evtpool_1735984 00:07:00.276 
size: 50.003479 MiB name: msgpool_1735984 00:07:00.276 size: 21.763794 MiB name: PDU_Pool 00:07:00.276 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:00.276 size: 0.026123 MiB name: Session_Pool 00:07:00.276 end mempools------- 00:07:00.276 6 memzones totaling size 4.142822 MiB 00:07:00.276 size: 1.000366 MiB name: RG_ring_0_1735984 00:07:00.276 size: 1.000366 MiB name: RG_ring_1_1735984 00:07:00.276 size: 1.000366 MiB name: RG_ring_4_1735984 00:07:00.276 size: 1.000366 MiB name: RG_ring_5_1735984 00:07:00.276 size: 0.125366 MiB name: RG_ring_2_1735984 00:07:00.276 size: 0.015991 MiB name: RG_ring_3_1735984 00:07:00.276 end memzones------- 00:07:00.276 10:23:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:07:00.276 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:07:00.276 list of free elements. size: 12.519348 MiB 00:07:00.276 element at address: 0x200000400000 with size: 1.999512 MiB 00:07:00.276 element at address: 0x200018e00000 with size: 0.999878 MiB 00:07:00.276 element at address: 0x200019000000 with size: 0.999878 MiB 00:07:00.276 element at address: 0x200003e00000 with size: 0.996277 MiB 00:07:00.276 element at address: 0x200031c00000 with size: 0.994446 MiB 00:07:00.276 element at address: 0x200013800000 with size: 0.978699 MiB 00:07:00.276 element at address: 0x200007000000 with size: 0.959839 MiB 00:07:00.276 element at address: 0x200019200000 with size: 0.936584 MiB 00:07:00.276 element at address: 0x200000200000 with size: 0.841614 MiB 00:07:00.276 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:07:00.276 element at address: 0x20000b200000 with size: 0.490723 MiB 00:07:00.276 element at address: 0x200000800000 with size: 0.487793 MiB 00:07:00.276 element at address: 0x200019400000 with size: 0.485657 MiB 00:07:00.276 element at address: 0x200027e00000 with size: 0.410034 MiB 00:07:00.276 element at address: 0x200003a00000 with size: 0.355530 MiB 00:07:00.276 list of standard malloc elements. 
size: 199.218079 MiB 00:07:00.276 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:07:00.276 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:07:00.276 element at address: 0x200018efff80 with size: 1.000122 MiB 00:07:00.276 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:07:00.276 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:07:00.276 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:00.276 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:07:00.276 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:00.276 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:07:00.276 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:07:00.276 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:07:00.276 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:07:00.276 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:07:00.276 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:07:00.276 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:07:00.276 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:07:00.276 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:07:00.276 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:07:00.276 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:07:00.276 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:07:00.276 element at address: 0x200003adb300 with size: 0.000183 MiB 00:07:00.276 element at address: 0x200003adb500 with size: 0.000183 MiB 00:07:00.276 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:07:00.276 element at address: 0x200003affa80 with size: 0.000183 MiB 00:07:00.276 element at address: 0x200003affb40 with size: 0.000183 MiB 00:07:00.276 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:07:00.276 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:07:00.276 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:07:00.276 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:07:00.276 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:07:00.276 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:07:00.276 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:07:00.276 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:07:00.276 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:07:00.276 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:07:00.276 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:07:00.276 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:07:00.277 element at address: 0x200027e69040 with size: 0.000183 MiB 00:07:00.277 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:07:00.277 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:07:00.277 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:07:00.277 list of memzone associated elements. 
size: 602.262573 MiB 00:07:00.277 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:07:00.277 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:00.277 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:07:00.277 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:00.277 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:07:00.277 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1735984_0 00:07:00.277 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:07:00.277 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1735984_0 00:07:00.277 element at address: 0x200003fff380 with size: 48.003052 MiB 00:07:00.277 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1735984_0 00:07:00.277 element at address: 0x2000195be940 with size: 20.255554 MiB 00:07:00.277 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:00.277 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:07:00.277 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:00.277 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:07:00.277 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1735984 00:07:00.277 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:07:00.277 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1735984 00:07:00.277 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:07:00.277 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1735984 00:07:00.277 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:07:00.277 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:00.277 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:07:00.277 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:00.277 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:07:00.277 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:00.277 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:07:00.277 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:00.277 element at address: 0x200003eff180 with size: 1.000488 MiB 00:07:00.277 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1735984 00:07:00.277 element at address: 0x200003affc00 with size: 1.000488 MiB 00:07:00.277 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1735984 00:07:00.277 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:07:00.277 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1735984 00:07:00.277 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:07:00.277 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1735984 00:07:00.277 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:07:00.277 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1735984 00:07:00.277 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:07:00.277 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:00.277 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:07:00.277 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:00.277 element at address: 0x20001947c540 with size: 0.250488 MiB 00:07:00.277 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:00.277 element at address: 0x200003adf880 with size: 0.125488 MiB 00:07:00.277 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1735984 00:07:00.277 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:07:00.277 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:00.277 element at address: 0x200027e69100 with size: 0.023743 MiB 00:07:00.277 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:00.277 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:07:00.277 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1735984 00:07:00.277 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:07:00.277 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:00.277 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:07:00.277 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1735984 00:07:00.277 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:07:00.277 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1735984 00:07:00.277 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:07:00.277 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:00.277 10:23:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:00.277 10:23:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1735984 00:07:00.277 10:23:05 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 1735984 ']' 00:07:00.277 10:23:05 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 1735984 00:07:00.277 10:23:05 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:07:00.277 10:23:05 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:00.277 10:23:05 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1735984 00:07:00.277 10:23:05 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:00.277 10:23:05 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:00.277 10:23:05 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1735984' 00:07:00.277 killing process with pid 1735984 00:07:00.277 10:23:05 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 1735984 00:07:00.277 10:23:05 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 1735984 00:07:00.277 00:07:00.277 real 0m1.262s 00:07:00.277 user 0m1.296s 00:07:00.277 sys 0m0.399s 00:07:00.277 10:23:05 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.277 10:23:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:00.277 ************************************ 00:07:00.277 END TEST dpdk_mem_utility 00:07:00.277 ************************************ 00:07:00.538 10:23:05 -- common/autotest_common.sh@1142 -- # return 0 00:07:00.538 10:23:05 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:07:00.538 10:23:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:00.538 10:23:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.538 10:23:05 -- common/autotest_common.sh@10 -- # set +x 00:07:00.538 ************************************ 00:07:00.538 START TEST event 00:07:00.538 ************************************ 00:07:00.538 10:23:06 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:07:00.538 * Looking for test storage... 
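The dpdk_mem_utility section above pairs the env_dpdk_get_mem_stats RPC, which has the target write its DPDK memory state to /tmp/spdk_mem_dump.txt (the filename reported in the JSON reply above), with scripts/dpdk_mem_info.py, which produced the heap, mempool, memzone and malloc-element breakdown in the log. A rough sketch of the same flow against a running target, assuming an SPDK checkout at ./spdk and the default RPC socket:

    # ask the target to dump its DPDK memory state to /tmp/spdk_mem_dump.txt
    ./spdk/scripts/rpc.py env_dpdk_get_mem_stats
    # summarize heaps, mempools and memzones from that dump, as in the log above
    ./spdk/scripts/dpdk_mem_info.py
    # the test then re-runs the script with -m 0, the variant whose output follows above
    ./spdk/scripts/dpdk_mem_info.py -m 0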
00:07:00.538 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:00.538 10:23:06 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:07:00.538 10:23:06 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:00.538 10:23:06 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:00.538 10:23:06 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:00.538 10:23:06 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.538 10:23:06 event -- common/autotest_common.sh@10 -- # set +x 00:07:00.538 ************************************ 00:07:00.538 START TEST event_perf 00:07:00.538 ************************************ 00:07:00.538 10:23:06 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:00.538 Running I/O for 1 seconds...[2024-07-22 10:23:06.195977] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:07:00.538 [2024-07-22 10:23:06.196067] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1736374 ] 00:07:00.538 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.799 [2024-07-22 10:23:06.270869] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:00.800 [2024-07-22 10:23:06.310685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.800 [2024-07-22 10:23:06.310804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.800 [2024-07-22 10:23:06.310963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.800 Running I/O for 1 seconds...[2024-07-22 10:23:06.310962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:01.742 00:07:01.742 lcore 0: 180587 00:07:01.742 lcore 1: 180583 00:07:01.742 lcore 2: 180582 00:07:01.742 lcore 3: 180585 00:07:01.742 done. 00:07:01.742 00:07:01.742 real 0m1.176s 00:07:01.742 user 0m4.085s 00:07:01.742 sys 0m0.087s 00:07:01.742 10:23:07 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:01.742 10:23:07 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:01.742 ************************************ 00:07:01.742 END TEST event_perf 00:07:01.742 ************************************ 00:07:01.742 10:23:07 event -- common/autotest_common.sh@1142 -- # return 0 00:07:01.742 10:23:07 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:01.742 10:23:07 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:01.742 10:23:07 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.742 10:23:07 event -- common/autotest_common.sh@10 -- # set +x 00:07:01.742 ************************************ 00:07:01.742 START TEST event_reactor 00:07:01.742 ************************************ 00:07:01.742 10:23:07 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:02.002 [2024-07-22 10:23:07.449339] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
00:07:02.002 [2024-07-22 10:23:07.449489] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1736727 ] 00:07:02.002 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.002 [2024-07-22 10:23:07.518792] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.002 [2024-07-22 10:23:07.548694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.940 test_start 00:07:02.940 oneshot 00:07:02.940 tick 100 00:07:02.940 tick 100 00:07:02.940 tick 250 00:07:02.940 tick 100 00:07:02.940 tick 100 00:07:02.940 tick 250 00:07:02.940 tick 100 00:07:02.940 tick 500 00:07:02.940 tick 100 00:07:02.940 tick 100 00:07:02.940 tick 250 00:07:02.940 tick 100 00:07:02.940 tick 100 00:07:02.940 test_end 00:07:02.940 00:07:02.940 real 0m1.158s 00:07:02.940 user 0m1.076s 00:07:02.940 sys 0m0.078s 00:07:02.940 10:23:08 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.940 10:23:08 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:02.940 ************************************ 00:07:02.940 END TEST event_reactor 00:07:02.940 ************************************ 00:07:02.940 10:23:08 event -- common/autotest_common.sh@1142 -- # return 0 00:07:02.940 10:23:08 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:02.940 10:23:08 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:02.940 10:23:08 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.940 10:23:08 event -- common/autotest_common.sh@10 -- # set +x 00:07:03.200 ************************************ 00:07:03.200 START TEST event_reactor_perf 00:07:03.200 ************************************ 00:07:03.200 10:23:08 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:03.200 [2024-07-22 10:23:08.682043] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
00:07:03.200 [2024-07-22 10:23:08.682124] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1736874 ] 00:07:03.200 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.200 [2024-07-22 10:23:08.753574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.200 [2024-07-22 10:23:08.788430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.142 test_start 00:07:04.142 test_end 00:07:04.142 Performance: 369963 events per second 00:07:04.142 00:07:04.142 real 0m1.166s 00:07:04.142 user 0m1.089s 00:07:04.142 sys 0m0.073s 00:07:04.142 10:23:09 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.142 10:23:09 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:04.142 ************************************ 00:07:04.142 END TEST event_reactor_perf 00:07:04.142 ************************************ 00:07:04.402 10:23:09 event -- common/autotest_common.sh@1142 -- # return 0 00:07:04.402 10:23:09 event -- event/event.sh@49 -- # uname -s 00:07:04.402 10:23:09 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:04.402 10:23:09 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:04.402 10:23:09 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:04.402 10:23:09 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.402 10:23:09 event -- common/autotest_common.sh@10 -- # set +x 00:07:04.402 ************************************ 00:07:04.403 START TEST event_scheduler 00:07:04.403 ************************************ 00:07:04.403 10:23:09 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:04.403 * Looking for test storage... 00:07:04.403 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:07:04.403 10:23:10 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:04.403 10:23:10 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1737144 00:07:04.403 10:23:10 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:04.403 10:23:10 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:04.403 10:23:10 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1737144 00:07:04.403 10:23:10 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 1737144 ']' 00:07:04.403 10:23:10 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.403 10:23:10 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:04.403 10:23:10 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
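The three event-framework micro-benchmarks whose output appears above (event_perf, reactor and reactor_perf) are standalone binaries built in the test tree, so they can be re-run individually; the sketch below uses the paths and flags taken from this log, with ./spdk standing in for the workspace checkout:

    # per-lcore event counts for 1 second on cores 0-3 (core mask 0xF), as reported above
    ./spdk/test/event/event_perf/event_perf -m 0xF -t 1
    # single-core oneshot/tick exercise for 1 second (the test_start ... test_end block above)
    ./spdk/test/event/reactor/reactor -t 1
    # single-core throughput run behind the "Performance: ... events per second" line above
    ./spdk/test/event/reactor_perf/reactor_perf -t 1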
00:07:04.403 10:23:10 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:04.403 10:23:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:04.403 [2024-07-22 10:23:10.069283] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:07:04.403 [2024-07-22 10:23:10.069347] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1737144 ] 00:07:04.662 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.662 [2024-07-22 10:23:10.133140] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:04.662 [2024-07-22 10:23:10.173516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.662 [2024-07-22 10:23:10.173678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.662 [2024-07-22 10:23:10.173833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:04.662 [2024-07-22 10:23:10.173834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:05.233 10:23:10 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:05.233 10:23:10 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:07:05.233 10:23:10 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:05.233 10:23:10 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.233 10:23:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:05.233 [2024-07-22 10:23:10.847950] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:07:05.233 [2024-07-22 10:23:10.847963] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:07:05.233 [2024-07-22 10:23:10.847970] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:05.233 [2024-07-22 10:23:10.847974] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:05.233 [2024-07-22 10:23:10.847977] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:05.233 10:23:10 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.233 10:23:10 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:05.233 10:23:10 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.233 10:23:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:05.233 [2024-07-22 10:23:10.897148] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
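The scheduler test above starts its application with --wait-for-rpc and switches from the default scheduler to the dynamic one before letting initialization finish; the NOTICE lines about load limit 20, core limit 80 and core busy 95 are the dynamic scheduler reporting its settings as it is selected. A minimal sketch of the same RPC sequence against spdk_tgt, assuming the default /var/tmp/spdk.sock socket:

    # --wait-for-rpc holds the app before subsystem init so the scheduler can still be chosen
    ./spdk/build/bin/spdk_tgt --wait-for-rpc &
    # select the dynamic scheduler, then let initialization proceed, as the test does
    ./spdk/scripts/rpc.py framework_set_scheduler dynamic
    ./spdk/scripts/rpc.py framework_start_init
    # confirm which scheduler is now active
    ./spdk/scripts/rpc.py framework_get_scheduler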
00:07:05.233 10:23:10 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.233 10:23:10 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:05.233 10:23:10 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:05.233 10:23:10 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.233 10:23:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:05.493 ************************************ 00:07:05.493 START TEST scheduler_create_thread 00:07:05.493 ************************************ 00:07:05.493 10:23:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:07:05.493 10:23:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:05.493 10:23:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.493 10:23:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.493 2 00:07:05.493 10:23:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.493 10:23:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:05.493 10:23:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.493 10:23:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.493 3 00:07:05.493 10:23:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.493 10:23:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:05.493 10:23:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.493 10:23:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.493 4 00:07:05.493 10:23:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.493 10:23:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:05.493 10:23:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.493 10:23:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.493 5 00:07:05.493 10:23:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.493 10:23:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:05.493 10:23:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.493 10:23:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.493 6 00:07:05.493 10:23:10 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.493 10:23:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:05.493 10:23:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.493 10:23:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.493 7 00:07:05.493 10:23:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.493 10:23:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:05.493 10:23:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.494 10:23:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.494 8 00:07:05.494 10:23:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.494 10:23:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:05.494 10:23:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.494 10:23:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.494 9 00:07:05.494 10:23:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.494 10:23:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:05.494 10:23:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.494 10:23:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:06.065 10 00:07:06.065 10:23:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.065 10:23:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:06.065 10:23:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.065 10:23:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.450 10:23:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.450 10:23:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:07.450 10:23:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:07.450 10:23:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.450 10:23:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:08.019 10:23:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.019 10:23:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:08.019 10:23:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.019 10:23:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:08.960 10:23:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.960 10:23:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:08.960 10:23:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:08.960 10:23:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.960 10:23:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:09.529 10:23:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.529 00:07:09.529 real 0m4.222s 00:07:09.529 user 0m0.024s 00:07:09.529 sys 0m0.007s 00:07:09.529 10:23:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.529 10:23:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:09.529 ************************************ 00:07:09.529 END TEST scheduler_create_thread 00:07:09.529 ************************************ 00:07:09.529 10:23:15 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:07:09.529 10:23:15 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:09.529 10:23:15 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1737144 00:07:09.529 10:23:15 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 1737144 ']' 00:07:09.529 10:23:15 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 1737144 00:07:09.529 10:23:15 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:07:09.529 10:23:15 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:09.529 10:23:15 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1737144 00:07:09.817 10:23:15 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:07:09.817 10:23:15 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:07:09.817 10:23:15 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1737144' 00:07:09.817 killing process with pid 1737144 00:07:09.817 10:23:15 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 1737144 00:07:09.817 10:23:15 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 1737144 00:07:10.131 [2024-07-22 10:23:15.538520] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
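The scheduler_create_thread block above does not use core SPDK RPCs; scheduler_thread_create, scheduler_thread_set_active and scheduler_thread_delete come from a test-only rpc.py plugin (scheduler_plugin) shipped next to the scheduler test application. A hypothetical standalone replay of the calls seen in the log is sketched below; the plugin path placed on PYTHONPATH is an assumption based on the SPDK test tree layout, and the thread ids 11 and 12 are simply the ones this run reported (normally they would be taken from scheduler_thread_create's output):

    # make the test-only plugin importable by rpc.py (path assumed from the SPDK source tree)
    export PYTHONPATH=./spdk/test/event/scheduler:$PYTHONPATH
    # create an active thread pinned to core 0, as the test's first step above does
    ./spdk/scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    # lower thread 11 to 50% active, then delete thread 12 (ids taken from this log)
    ./spdk/scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
    ./spdk/scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12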
00:07:10.131 00:07:10.131 real 0m5.791s 00:07:10.131 user 0m13.697s 00:07:10.131 sys 0m0.370s 00:07:10.131 10:23:15 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.131 10:23:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:10.131 ************************************ 00:07:10.132 END TEST event_scheduler 00:07:10.132 ************************************ 00:07:10.132 10:23:15 event -- common/autotest_common.sh@1142 -- # return 0 00:07:10.132 10:23:15 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:10.132 10:23:15 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:10.132 10:23:15 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:10.132 10:23:15 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.132 10:23:15 event -- common/autotest_common.sh@10 -- # set +x 00:07:10.132 ************************************ 00:07:10.132 START TEST app_repeat 00:07:10.132 ************************************ 00:07:10.132 10:23:15 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:07:10.132 10:23:15 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.132 10:23:15 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:10.132 10:23:15 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:10.132 10:23:15 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:10.132 10:23:15 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:10.132 10:23:15 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:10.132 10:23:15 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:10.132 10:23:15 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1738527 00:07:10.132 10:23:15 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:10.132 10:23:15 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:10.132 10:23:15 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1738527' 00:07:10.132 Process app_repeat pid: 1738527 00:07:10.132 10:23:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:10.132 10:23:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:10.132 spdk_app_start Round 0 00:07:10.132 10:23:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1738527 /var/tmp/spdk-nbd.sock 00:07:10.132 10:23:15 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1738527 ']' 00:07:10.132 10:23:15 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:10.132 10:23:15 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:10.132 10:23:15 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:10.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:10.132 10:23:15 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:10.132 10:23:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:10.132 [2024-07-22 10:23:15.818719] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
00:07:10.132 [2024-07-22 10:23:15.818784] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1738527 ] 00:07:10.397 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.397 [2024-07-22 10:23:15.886523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:10.397 [2024-07-22 10:23:15.922073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.397 [2024-07-22 10:23:15.922075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.397 10:23:15 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:10.397 10:23:15 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:10.397 10:23:15 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:10.658 Malloc0 00:07:10.658 10:23:16 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:10.658 Malloc1 00:07:10.658 10:23:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:10.658 10:23:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.659 10:23:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:10.659 10:23:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:10.659 10:23:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:10.659 10:23:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:10.659 10:23:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:10.659 10:23:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.659 10:23:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:10.659 10:23:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:10.659 10:23:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:10.659 10:23:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:10.659 10:23:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:10.659 10:23:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:10.659 10:23:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:10.659 10:23:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:10.920 /dev/nbd0 00:07:10.920 10:23:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:10.920 10:23:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:10.920 10:23:16 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:10.920 10:23:16 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:10.920 10:23:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:10.920 10:23:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:10.920 10:23:16 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:10.920 10:23:16 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:10.920 10:23:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:10.920 10:23:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:10.920 10:23:16 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:10.920 1+0 records in 00:07:10.920 1+0 records out 00:07:10.920 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280667 s, 14.6 MB/s 00:07:10.920 10:23:16 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:10.920 10:23:16 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:10.920 10:23:16 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:10.920 10:23:16 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:10.920 10:23:16 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:10.920 10:23:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:10.920 10:23:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:10.920 10:23:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:11.182 /dev/nbd1 00:07:11.182 10:23:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:11.182 10:23:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:11.182 10:23:16 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:11.182 10:23:16 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:11.182 10:23:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:11.182 10:23:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:11.182 10:23:16 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:11.182 10:23:16 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:11.182 10:23:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:11.182 10:23:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:11.182 10:23:16 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:11.182 1+0 records in 00:07:11.182 1+0 records out 00:07:11.182 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276186 s, 14.8 MB/s 00:07:11.182 10:23:16 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:11.182 10:23:16 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:11.182 10:23:16 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:11.182 10:23:16 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:11.182 10:23:16 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:11.182 10:23:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:11.182 10:23:16 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:11.182 10:23:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:11.182 10:23:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.182 10:23:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:11.182 10:23:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:11.182 { 00:07:11.182 "nbd_device": "/dev/nbd0", 00:07:11.182 "bdev_name": "Malloc0" 00:07:11.182 }, 00:07:11.182 { 00:07:11.182 "nbd_device": "/dev/nbd1", 00:07:11.182 "bdev_name": "Malloc1" 00:07:11.182 } 00:07:11.182 ]' 00:07:11.182 10:23:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:11.182 { 00:07:11.182 "nbd_device": "/dev/nbd0", 00:07:11.182 "bdev_name": "Malloc0" 00:07:11.182 }, 00:07:11.182 { 00:07:11.182 "nbd_device": "/dev/nbd1", 00:07:11.182 "bdev_name": "Malloc1" 00:07:11.182 } 00:07:11.182 ]' 00:07:11.182 10:23:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:11.442 10:23:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:11.442 /dev/nbd1' 00:07:11.442 10:23:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:11.442 /dev/nbd1' 00:07:11.442 10:23:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:11.442 10:23:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:11.442 10:23:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:11.442 10:23:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:11.442 10:23:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:11.442 10:23:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:11.442 10:23:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:11.442 10:23:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:11.442 10:23:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:11.442 10:23:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:11.442 10:23:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:11.442 10:23:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:11.442 256+0 records in 00:07:11.442 256+0 records out 00:07:11.442 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118317 s, 88.6 MB/s 00:07:11.442 10:23:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:11.442 10:23:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:11.442 256+0 records in 00:07:11.442 256+0 records out 00:07:11.442 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0151049 s, 69.4 MB/s 00:07:11.442 10:23:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:11.442 10:23:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:11.442 256+0 records in 00:07:11.442 256+0 records out 00:07:11.442 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0168921 s, 62.1 MB/s 00:07:11.442 10:23:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:11.442 10:23:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:11.442 10:23:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:11.442 10:23:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:11.442 10:23:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:11.442 10:23:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:11.442 10:23:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:11.442 10:23:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:11.442 10:23:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:11.442 10:23:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:11.442 10:23:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:11.442 10:23:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:11.442 10:23:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:11.442 10:23:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.442 10:23:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:11.442 10:23:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:11.442 10:23:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:11.442 10:23:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.442 10:23:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:11.702 10:23:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:11.702 10:23:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:11.702 10:23:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:11.702 10:23:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.702 10:23:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.702 10:23:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:11.702 10:23:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:11.702 10:23:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.702 10:23:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.702 10:23:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:11.702 10:23:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:11.702 10:23:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:11.702 10:23:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:11.702 10:23:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.702 10:23:17 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.702 10:23:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:11.702 10:23:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:11.702 10:23:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.702 10:23:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:11.702 10:23:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.702 10:23:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:11.963 10:23:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:11.963 10:23:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:11.963 10:23:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:11.963 10:23:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:11.963 10:23:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:11.963 10:23:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:11.963 10:23:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:11.963 10:23:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:11.963 10:23:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:11.963 10:23:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:11.963 10:23:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:11.963 10:23:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:11.963 10:23:17 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:12.224 10:23:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:12.224 [2024-07-22 10:23:17.816915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:12.224 [2024-07-22 10:23:17.846841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.224 [2024-07-22 10:23:17.846844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.224 [2024-07-22 10:23:17.878112] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:12.224 [2024-07-22 10:23:17.878145] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:15.525 10:23:20 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:15.525 10:23:20 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:15.525 spdk_app_start Round 1 00:07:15.525 10:23:20 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1738527 /var/tmp/spdk-nbd.sock 00:07:15.525 10:23:20 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1738527 ']' 00:07:15.525 10:23:20 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:15.525 10:23:20 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:15.525 10:23:20 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:15.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
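For reference, the teardown sequence traced above reduces to the following bash sketch. It is reconstructed from the xtrace output rather than copied from bdev/nbd_common.sh, and the variable names are illustrative; the rpc.py path, socket, and loop bound are the ones shown in the trace.

  # Stop every exported NBD device, wait for the kernel to drop it,
  # confirm the target reports zero devices, then kill the app.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock

  for dev in /dev/nbd0 /dev/nbd1; do
      "$rpc" -s "$sock" nbd_stop_disk "$dev"
      name=$(basename "$dev")
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$name" /proc/partitions || break   # gone from the kernel
          sleep 0.1
      done
  done

  # nbd_get_disks now returns an empty JSON list, so the count must be 0.
  count=$("$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
  [ "$count" -eq 0 ]

  # Ask the app_repeat instance to exit; the test then sleeps 3 seconds
  # before starting the next round.
  "$rpc" -s "$sock" spdk_kill_instance SIGTERM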
00:07:15.525 10:23:20 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:15.525 10:23:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:15.525 10:23:20 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:15.525 10:23:20 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:15.525 10:23:20 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:15.525 Malloc0 00:07:15.525 10:23:21 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:15.525 Malloc1 00:07:15.525 10:23:21 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:15.525 10:23:21 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.525 10:23:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:15.525 10:23:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:15.525 10:23:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:15.525 10:23:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:15.525 10:23:21 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:15.525 10:23:21 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.525 10:23:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:15.525 10:23:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:15.525 10:23:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:15.525 10:23:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:15.525 10:23:21 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:15.525 10:23:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:15.525 10:23:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:15.525 10:23:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:15.785 /dev/nbd0 00:07:15.785 10:23:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:15.785 10:23:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:15.785 10:23:21 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:15.785 10:23:21 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:15.785 10:23:21 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:15.785 10:23:21 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:15.785 10:23:21 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:15.785 10:23:21 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:15.785 10:23:21 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:15.785 10:23:21 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:15.785 10:23:21 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:07:15.785 1+0 records in 00:07:15.785 1+0 records out 00:07:15.785 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240157 s, 17.1 MB/s 00:07:15.785 10:23:21 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:15.785 10:23:21 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:15.785 10:23:21 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:15.785 10:23:21 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:15.785 10:23:21 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:15.785 10:23:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:15.785 10:23:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:15.785 10:23:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:16.045 /dev/nbd1 00:07:16.045 10:23:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:16.045 10:23:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:16.045 10:23:21 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:16.045 10:23:21 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:16.045 10:23:21 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:16.045 10:23:21 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:16.045 10:23:21 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:16.045 10:23:21 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:16.045 10:23:21 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:16.045 10:23:21 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:16.045 10:23:21 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:16.045 1+0 records in 00:07:16.045 1+0 records out 00:07:16.045 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241559 s, 17.0 MB/s 00:07:16.045 10:23:21 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:16.045 10:23:21 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:16.045 10:23:21 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:16.045 10:23:21 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:16.045 10:23:21 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:16.045 10:23:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:16.045 10:23:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:16.045 10:23:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:16.045 10:23:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.045 10:23:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:16.045 10:23:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:07:16.045 { 00:07:16.045 "nbd_device": "/dev/nbd0", 00:07:16.045 "bdev_name": "Malloc0" 00:07:16.045 }, 00:07:16.045 { 00:07:16.045 "nbd_device": "/dev/nbd1", 00:07:16.045 "bdev_name": "Malloc1" 00:07:16.045 } 00:07:16.045 ]' 00:07:16.045 10:23:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:16.045 10:23:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:16.045 { 00:07:16.045 "nbd_device": "/dev/nbd0", 00:07:16.045 "bdev_name": "Malloc0" 00:07:16.045 }, 00:07:16.045 { 00:07:16.045 "nbd_device": "/dev/nbd1", 00:07:16.045 "bdev_name": "Malloc1" 00:07:16.045 } 00:07:16.045 ]' 00:07:16.045 10:23:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:16.045 /dev/nbd1' 00:07:16.305 10:23:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:16.305 /dev/nbd1' 00:07:16.305 10:23:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:16.305 10:23:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:16.306 10:23:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:16.306 10:23:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:16.306 10:23:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:16.306 10:23:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:16.306 10:23:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:16.306 10:23:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:16.306 10:23:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:16.306 10:23:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:16.306 10:23:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:16.306 10:23:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:16.306 256+0 records in 00:07:16.306 256+0 records out 00:07:16.306 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124522 s, 84.2 MB/s 00:07:16.306 10:23:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:16.306 10:23:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:16.306 256+0 records in 00:07:16.306 256+0 records out 00:07:16.306 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0154117 s, 68.0 MB/s 00:07:16.306 10:23:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:16.306 10:23:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:16.306 256+0 records in 00:07:16.306 256+0 records out 00:07:16.306 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0170687 s, 61.4 MB/s 00:07:16.306 10:23:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:16.306 10:23:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:16.306 10:23:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:16.306 10:23:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:16.306 10:23:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:16.306 10:23:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:16.306 10:23:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:16.306 10:23:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:16.306 10:23:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:16.306 10:23:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:16.306 10:23:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:16.306 10:23:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:16.306 10:23:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:16.306 10:23:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.306 10:23:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:16.306 10:23:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:16.306 10:23:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:16.306 10:23:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.306 10:23:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:16.306 10:23:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:16.306 10:23:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:16.306 10:23:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:16.306 10:23:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.306 10:23:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.306 10:23:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:16.306 10:23:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:16.306 10:23:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.306 10:23:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.306 10:23:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:16.606 10:23:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:16.606 10:23:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:16.606 10:23:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:16.606 10:23:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.606 10:23:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.606 10:23:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:16.606 10:23:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:16.606 10:23:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.606 10:23:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:16.606 10:23:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.606 10:23:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:16.606 10:23:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:16.866 10:23:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:16.866 10:23:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:16.866 10:23:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:16.866 10:23:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:16.866 10:23:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:16.866 10:23:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:16.866 10:23:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:16.866 10:23:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:16.866 10:23:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:16.866 10:23:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:16.866 10:23:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:16.866 10:23:22 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:16.866 10:23:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:17.127 [2024-07-22 10:23:22.639529] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:17.127 [2024-07-22 10:23:22.669447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.127 [2024-07-22 10:23:22.669450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.127 [2024-07-22 10:23:22.701444] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:17.127 [2024-07-22 10:23:22.701479] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:20.426 10:23:25 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:20.426 10:23:25 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:20.426 spdk_app_start Round 2 00:07:20.426 10:23:25 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1738527 /var/tmp/spdk-nbd.sock 00:07:20.426 10:23:25 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1738527 ']' 00:07:20.426 10:23:25 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:20.426 10:23:25 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:20.426 10:23:25 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:20.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
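The waitfornbd helper exercised after each nbd_start_disk call above is a two-stage readiness probe: wait for the nbd name to show up in /proc/partitions, then prove the device accepts I/O by reading one 4 KiB block with O_DIRECT and checking that data came back. A sketch of that logic, reconstructed from the trace; the function name is illustrative and the retry sleeps are assumed, since the trace only shows the successful first pass.

  waitfornbd_sketch() {
      local nbd_name=$1 i size
      local testfile=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest

      # Stage 1: the kernel lists the device in /proc/partitions.
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1
      done

      # Stage 2: a single 4 KiB direct read succeeds and returns data.
      for ((i = 1; i <= 20; i++)); do
          if dd if=/dev/$nbd_name of="$testfile" bs=4096 count=1 iflag=direct; then
              size=$(stat -c %s "$testfile")
              rm -f "$testfile"
              [ "$size" != 0 ] && return 0
          fi
          sleep 0.1
      done
      return 1
  }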
00:07:20.426 10:23:25 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:20.426 10:23:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:20.426 10:23:25 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:20.426 10:23:25 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:20.426 10:23:25 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:20.426 Malloc0 00:07:20.426 10:23:25 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:20.426 Malloc1 00:07:20.426 10:23:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:20.426 10:23:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.426 10:23:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:20.426 10:23:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:20.426 10:23:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:20.426 10:23:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:20.426 10:23:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:20.426 10:23:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.426 10:23:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:20.426 10:23:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:20.426 10:23:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:20.426 10:23:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:20.426 10:23:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:20.426 10:23:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:20.426 10:23:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:20.426 10:23:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:20.687 /dev/nbd0 00:07:20.687 10:23:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:20.687 10:23:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:20.687 10:23:26 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:20.687 10:23:26 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:20.687 10:23:26 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:20.687 10:23:26 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:20.687 10:23:26 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:20.687 10:23:26 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:20.687 10:23:26 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:20.687 10:23:26 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:20.687 10:23:26 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:07:20.687 1+0 records in 00:07:20.687 1+0 records out 00:07:20.687 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198286 s, 20.7 MB/s 00:07:20.687 10:23:26 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:20.687 10:23:26 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:20.687 10:23:26 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:20.687 10:23:26 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:20.687 10:23:26 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:20.688 10:23:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:20.688 10:23:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:20.688 10:23:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:20.688 /dev/nbd1 00:07:20.688 10:23:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:20.688 10:23:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:20.688 10:23:26 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:20.688 10:23:26 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:20.688 10:23:26 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:20.688 10:23:26 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:20.688 10:23:26 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:20.688 10:23:26 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:20.688 10:23:26 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:20.688 10:23:26 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:20.688 10:23:26 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:20.688 1+0 records in 00:07:20.688 1+0 records out 00:07:20.688 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279028 s, 14.7 MB/s 00:07:20.688 10:23:26 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:20.688 10:23:26 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:20.688 10:23:26 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:20.688 10:23:26 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:20.688 10:23:26 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:20.688 10:23:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:20.688 10:23:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:20.688 10:23:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:20.688 10:23:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.688 10:23:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:20.949 10:23:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:07:20.949 { 00:07:20.949 "nbd_device": "/dev/nbd0", 00:07:20.949 "bdev_name": "Malloc0" 00:07:20.949 }, 00:07:20.949 { 00:07:20.949 "nbd_device": "/dev/nbd1", 00:07:20.949 "bdev_name": "Malloc1" 00:07:20.949 } 00:07:20.949 ]' 00:07:20.949 10:23:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:20.949 { 00:07:20.949 "nbd_device": "/dev/nbd0", 00:07:20.949 "bdev_name": "Malloc0" 00:07:20.949 }, 00:07:20.949 { 00:07:20.949 "nbd_device": "/dev/nbd1", 00:07:20.949 "bdev_name": "Malloc1" 00:07:20.949 } 00:07:20.949 ]' 00:07:20.949 10:23:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:20.949 10:23:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:20.949 /dev/nbd1' 00:07:20.949 10:23:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:20.949 /dev/nbd1' 00:07:20.949 10:23:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:20.949 10:23:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:20.949 10:23:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:20.949 10:23:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:20.949 10:23:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:20.949 10:23:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:20.949 10:23:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:20.949 10:23:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:20.949 10:23:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:20.949 10:23:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:20.949 10:23:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:20.949 10:23:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:20.949 256+0 records in 00:07:20.949 256+0 records out 00:07:20.949 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0116639 s, 89.9 MB/s 00:07:20.949 10:23:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:20.949 10:23:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:20.949 256+0 records in 00:07:20.949 256+0 records out 00:07:20.949 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0160649 s, 65.3 MB/s 00:07:20.949 10:23:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:20.949 10:23:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:20.949 256+0 records in 00:07:20.949 256+0 records out 00:07:20.949 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0167632 s, 62.6 MB/s 00:07:20.949 10:23:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:20.949 10:23:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:20.949 10:23:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:20.949 10:23:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:20.949 10:23:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:20.949 10:23:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:20.949 10:23:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:20.949 10:23:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:20.949 10:23:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:20.949 10:23:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:21.209 10:23:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:21.209 10:23:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:21.209 10:23:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:21.209 10:23:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:21.209 10:23:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:21.209 10:23:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:21.209 10:23:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:21.209 10:23:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:21.209 10:23:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:21.209 10:23:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:21.209 10:23:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:21.209 10:23:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:21.209 10:23:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.209 10:23:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:21.209 10:23:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:21.209 10:23:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:21.209 10:23:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.209 10:23:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:21.209 10:23:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:21.470 10:23:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:21.470 10:23:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:21.470 10:23:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:21.470 10:23:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.470 10:23:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:21.470 10:23:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:21.470 10:23:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:21.470 10:23:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.470 10:23:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:21.470 10:23:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:21.470 10:23:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:21.470 10:23:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:21.470 10:23:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:21.470 10:23:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:21.730 10:23:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:21.730 10:23:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:21.730 10:23:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:21.730 10:23:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:21.730 10:23:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:21.730 10:23:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:21.730 10:23:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:21.730 10:23:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:21.730 10:23:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:21.730 10:23:27 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:21.730 10:23:27 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:21.990 [2024-07-22 10:23:27.491804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:21.990 [2024-07-22 10:23:27.521767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.990 [2024-07-22 10:23:27.521770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.990 [2024-07-22 10:23:27.553276] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:21.990 [2024-07-22 10:23:27.553311] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:25.284 10:23:30 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1738527 /var/tmp/spdk-nbd.sock 00:07:25.284 10:23:30 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1738527 ']' 00:07:25.284 10:23:30 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:25.284 10:23:30 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:25.284 10:23:30 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:25.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
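The data path checked in each round above is the same: seed a 1 MiB random pattern file, push it to both NBD devices with dd using O_DIRECT, then compare the first 1 MiB of every device back against the pattern with cmp. A condensed sketch using the commands from the trace; the loops are an assumption, since the script expands them per device.

  pattern=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest

  # 256 x 4 KiB = 1 MiB of random data to use as the reference pattern.
  dd if=/dev/urandom of="$pattern" bs=4096 count=256

  # Write the pattern straight through to each exported device.
  for dev in /dev/nbd0 /dev/nbd1; do
      dd if="$pattern" of="$dev" bs=4096 count=256 oflag=direct
  done

  # Read back: the first 1 MiB of each device must match byte for byte.
  for dev in /dev/nbd0 /dev/nbd1; do
      cmp -b -n 1M "$pattern" "$dev"
  done

  rm "$pattern"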
00:07:25.284 10:23:30 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:25.284 10:23:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:25.284 10:23:30 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:25.284 10:23:30 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:25.284 10:23:30 event.app_repeat -- event/event.sh@39 -- # killprocess 1738527 00:07:25.284 10:23:30 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 1738527 ']' 00:07:25.284 10:23:30 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 1738527 00:07:25.284 10:23:30 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:07:25.284 10:23:30 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:25.284 10:23:30 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1738527 00:07:25.284 10:23:30 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:25.284 10:23:30 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:25.284 10:23:30 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1738527' 00:07:25.284 killing process with pid 1738527 00:07:25.284 10:23:30 event.app_repeat -- common/autotest_common.sh@967 -- # kill 1738527 00:07:25.284 10:23:30 event.app_repeat -- common/autotest_common.sh@972 -- # wait 1738527 00:07:25.284 spdk_app_start is called in Round 0. 00:07:25.284 Shutdown signal received, stop current app iteration 00:07:25.284 Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 reinitialization... 00:07:25.284 spdk_app_start is called in Round 1. 00:07:25.284 Shutdown signal received, stop current app iteration 00:07:25.284 Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 reinitialization... 00:07:25.284 spdk_app_start is called in Round 2. 00:07:25.284 Shutdown signal received, stop current app iteration 00:07:25.284 Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 reinitialization... 00:07:25.284 spdk_app_start is called in Round 3. 
00:07:25.284 Shutdown signal received, stop current app iteration 00:07:25.284 10:23:30 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:25.284 10:23:30 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:25.284 00:07:25.284 real 0m14.905s 00:07:25.284 user 0m32.414s 00:07:25.284 sys 0m2.087s 00:07:25.284 10:23:30 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.284 10:23:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:25.284 ************************************ 00:07:25.284 END TEST app_repeat 00:07:25.284 ************************************ 00:07:25.284 10:23:30 event -- common/autotest_common.sh@1142 -- # return 0 00:07:25.284 10:23:30 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:25.284 10:23:30 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:25.284 10:23:30 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:25.284 10:23:30 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.284 10:23:30 event -- common/autotest_common.sh@10 -- # set +x 00:07:25.284 ************************************ 00:07:25.284 START TEST cpu_locks 00:07:25.284 ************************************ 00:07:25.284 10:23:30 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:25.284 * Looking for test storage... 00:07:25.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:25.284 10:23:30 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:25.284 10:23:30 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:25.284 10:23:30 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:25.284 10:23:30 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:25.284 10:23:30 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:25.284 10:23:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.284 10:23:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:25.284 ************************************ 00:07:25.284 START TEST default_locks 00:07:25.284 ************************************ 00:07:25.284 10:23:30 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:07:25.284 10:23:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1741768 00:07:25.284 10:23:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1741768 00:07:25.284 10:23:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:25.284 10:23:30 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1741768 ']' 00:07:25.285 10:23:30 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.285 10:23:30 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:25.285 10:23:30 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
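The killprocess helper used to stop the app_repeat target above (and the spdk_tgt instances in the cpu_locks tests that follow) checks that the pid is still alive, resolves its command name to detect the sudo-wrapper case, then kills it and waits for it to exit. A sketch of the plain non-sudo path seen in the trace; the sudo branch is elided and the function name is illustrative.

  killprocess_sketch() {
      local pid=$1
      kill -0 "$pid" || { echo "Process with pid $pid is not found"; return 0; }

      # On Linux, ps resolves the command name (reactor_0 for an SPDK app).
      local process_name
      process_name=$(ps --no-headers -o comm= "$pid")
      if [ "$process_name" = sudo ]; then
          : # the real target is sudo's child; handling elided in this sketch
      fi

      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
  }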
00:07:25.285 10:23:30 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:25.285 10:23:30 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:25.285 [2024-07-22 10:23:30.966275] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:07:25.285 [2024-07-22 10:23:30.966335] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1741768 ] 00:07:25.544 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.544 [2024-07-22 10:23:31.038888] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.544 [2024-07-22 10:23:31.076999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.114 10:23:31 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:26.114 10:23:31 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:07:26.114 10:23:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1741768 00:07:26.114 10:23:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1741768 00:07:26.114 10:23:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:26.684 lslocks: write error 00:07:26.684 10:23:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1741768 00:07:26.684 10:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 1741768 ']' 00:07:26.684 10:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 1741768 00:07:26.684 10:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:07:26.684 10:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:26.684 10:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1741768 00:07:26.684 10:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:26.684 10:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:26.684 10:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1741768' 00:07:26.684 killing process with pid 1741768 00:07:26.684 10:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 1741768 00:07:26.684 10:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 1741768 00:07:26.945 10:23:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1741768 00:07:26.945 10:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:07:26.945 10:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1741768 00:07:26.945 10:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:26.945 10:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:26.945 10:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:26.945 10:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:26.945 10:23:32 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 1741768 00:07:26.945 10:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1741768 ']' 00:07:26.945 10:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.945 10:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:26.945 10:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.945 10:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:26.945 10:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:26.945 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1741768) - No such process 00:07:26.945 ERROR: process (pid: 1741768) is no longer running 00:07:26.945 10:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:26.945 10:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:07:26.945 10:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:07:26.945 10:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:26.945 10:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:26.945 10:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:26.945 10:23:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:26.945 10:23:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:26.945 10:23:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:26.945 10:23:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:26.945 00:07:26.945 real 0m1.494s 00:07:26.945 user 0m1.600s 00:07:26.945 sys 0m0.503s 00:07:26.945 10:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.945 10:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:26.945 ************************************ 00:07:26.945 END TEST default_locks 00:07:26.945 ************************************ 00:07:26.945 10:23:32 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:26.945 10:23:32 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:26.945 10:23:32 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:26.945 10:23:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.945 10:23:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:26.945 ************************************ 00:07:26.945 START TEST default_locks_via_rpc 00:07:26.945 ************************************ 00:07:26.945 10:23:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:07:26.945 10:23:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1742120 00:07:26.945 10:23:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1742120 00:07:26.945 10:23:32 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@829 -- # '[' -z 1742120 ']' 00:07:26.945 10:23:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:26.945 10:23:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.945 10:23:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:26.945 10:23:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.945 10:23:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:26.945 10:23:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.945 [2024-07-22 10:23:32.541615] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:07:26.945 [2024-07-22 10:23:32.541673] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1742120 ] 00:07:26.945 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.945 [2024-07-22 10:23:32.607060] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.945 [2024-07-22 10:23:32.637218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.900 10:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:27.900 10:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:27.900 10:23:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:27.900 10:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.900 10:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.900 10:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.900 10:23:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:27.900 10:23:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:27.900 10:23:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:27.900 10:23:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:27.900 10:23:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:27.900 10:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.900 10:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.900 10:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.900 10:23:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1742120 00:07:27.900 10:23:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1742120 00:07:27.900 10:23:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 
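The locks_exist check driving both default_locks tests above simply asks lslocks whether the target pid holds a file lock whose path contains spdk_cpu_lock. The 'lslocks: write error' lines in the log are the expected side effect of grep -q closing the pipe as soon as it matches, not a test failure. A minimal sketch; the helper name is illustrative.

  # Assert that the given spdk_tgt pid holds its CPU core lock file(s).
  locks_exist_sketch() {
      local pid=$1
      lslocks -p "$pid" | grep -q spdk_cpu_lock
  }

  # default_locks expects the lock to exist for a plain 'spdk_tgt -m 0x1';
  # default_locks_via_rpc repeats the check after toggling the locks with the
  # framework_disable_cpumask_locks / framework_enable_cpumask_locks RPCs.
  # Usage as in the tests:  locks_exist_sketch "$spdk_tgt_pid" || exit 1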
00:07:28.160 10:23:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1742120 00:07:28.160 10:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 1742120 ']' 00:07:28.161 10:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 1742120 00:07:28.161 10:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:07:28.161 10:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:28.161 10:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1742120 00:07:28.161 10:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:28.161 10:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:28.161 10:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1742120' 00:07:28.161 killing process with pid 1742120 00:07:28.161 10:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 1742120 00:07:28.161 10:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 1742120 00:07:28.422 00:07:28.422 real 0m1.423s 00:07:28.422 user 0m1.523s 00:07:28.422 sys 0m0.476s 00:07:28.422 10:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.422 10:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.422 ************************************ 00:07:28.422 END TEST default_locks_via_rpc 00:07:28.422 ************************************ 00:07:28.422 10:23:33 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:28.422 10:23:33 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:28.422 10:23:33 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:28.422 10:23:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.422 10:23:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:28.422 ************************************ 00:07:28.422 START TEST non_locking_app_on_locked_coremask 00:07:28.422 ************************************ 00:07:28.422 10:23:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:07:28.422 10:23:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1742378 00:07:28.422 10:23:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1742378 /var/tmp/spdk.sock 00:07:28.422 10:23:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:28.422 10:23:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1742378 ']' 00:07:28.422 10:23:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.422 10:23:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:28.422 10:23:33 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.422 10:23:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:28.422 10:23:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:28.422 [2024-07-22 10:23:34.031644] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:07:28.422 [2024-07-22 10:23:34.031696] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1742378 ] 00:07:28.422 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.422 [2024-07-22 10:23:34.099498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.683 [2024-07-22 10:23:34.138750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.254 10:23:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:29.254 10:23:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:29.254 10:23:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1742515 00:07:29.254 10:23:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1742515 /var/tmp/spdk2.sock 00:07:29.254 10:23:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1742515 ']' 00:07:29.254 10:23:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:29.254 10:23:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:29.254 10:23:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:29.254 10:23:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:29.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:29.254 10:23:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:29.254 10:23:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:29.254 [2024-07-22 10:23:34.841936] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:07:29.254 [2024-07-22 10:23:34.841988] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1742515 ] 00:07:29.254 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.254 [2024-07-22 10:23:34.938339] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:29.254 [2024-07-22 10:23:34.938364] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.515 [2024-07-22 10:23:35.001485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.086 10:23:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:30.086 10:23:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:30.086 10:23:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1742378 00:07:30.086 10:23:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1742378 00:07:30.086 10:23:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:30.655 lslocks: write error 00:07:30.655 10:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1742378 00:07:30.655 10:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1742378 ']' 00:07:30.655 10:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1742378 00:07:30.655 10:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:30.655 10:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:30.655 10:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1742378 00:07:30.655 10:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:30.655 10:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:30.655 10:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1742378' 00:07:30.655 killing process with pid 1742378 00:07:30.655 10:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1742378 00:07:30.655 10:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1742378 00:07:30.915 10:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1742515 00:07:30.915 10:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1742515 ']' 00:07:30.915 10:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1742515 00:07:30.915 10:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:31.174 10:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:31.174 10:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1742515 00:07:31.174 10:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:31.174 10:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:31.174 10:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1742515' 00:07:31.174 
killing process with pid 1742515 00:07:31.174 10:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1742515 00:07:31.174 10:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1742515 00:07:31.174 00:07:31.174 real 0m2.882s 00:07:31.174 user 0m3.143s 00:07:31.174 sys 0m0.870s 00:07:31.174 10:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:31.174 10:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:31.174 ************************************ 00:07:31.174 END TEST non_locking_app_on_locked_coremask 00:07:31.174 ************************************ 00:07:31.434 10:23:36 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:31.434 10:23:36 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:31.434 10:23:36 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:31.434 10:23:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.434 10:23:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:31.434 ************************************ 00:07:31.434 START TEST locking_app_on_unlocked_coremask 00:07:31.434 ************************************ 00:07:31.434 10:23:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:07:31.434 10:23:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1742902 00:07:31.434 10:23:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1742902 /var/tmp/spdk.sock 00:07:31.434 10:23:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:31.434 10:23:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1742902 ']' 00:07:31.434 10:23:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.434 10:23:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:31.434 10:23:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.434 10:23:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:31.434 10:23:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:31.434 [2024-07-22 10:23:36.985410] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
00:07:31.434 [2024-07-22 10:23:36.985464] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1742902 ] 00:07:31.434 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.434 [2024-07-22 10:23:37.053408] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:31.434 [2024-07-22 10:23:37.053440] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.434 [2024-07-22 10:23:37.090794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.373 10:23:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:32.373 10:23:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:32.373 10:23:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:32.373 10:23:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1743221 00:07:32.373 10:23:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1743221 /var/tmp/spdk2.sock 00:07:32.373 10:23:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1743221 ']' 00:07:32.373 10:23:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:32.373 10:23:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:32.373 10:23:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:32.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:32.373 10:23:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:32.373 10:23:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:32.373 [2024-07-22 10:23:37.769597] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
00:07:32.373 [2024-07-22 10:23:37.769636] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1743221 ] 00:07:32.373 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.373 [2024-07-22 10:23:37.860536] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.373 [2024-07-22 10:23:37.923672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.943 10:23:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:32.943 10:23:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:32.943 10:23:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1743221 00:07:32.943 10:23:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1743221 00:07:32.943 10:23:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:33.514 lslocks: write error 00:07:33.514 10:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1742902 00:07:33.514 10:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1742902 ']' 00:07:33.514 10:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1742902 00:07:33.514 10:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:33.514 10:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:33.514 10:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1742902 00:07:33.514 10:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:33.514 10:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:33.514 10:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1742902' 00:07:33.514 killing process with pid 1742902 00:07:33.514 10:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1742902 00:07:33.514 10:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1742902 00:07:34.087 10:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1743221 00:07:34.087 10:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1743221 ']' 00:07:34.087 10:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1743221 00:07:34.087 10:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:34.087 10:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:34.087 10:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1743221 00:07:34.087 10:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:07:34.087 10:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:34.087 10:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1743221' 00:07:34.087 killing process with pid 1743221 00:07:34.087 10:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1743221 00:07:34.087 10:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1743221 00:07:34.348 00:07:34.348 real 0m2.885s 00:07:34.348 user 0m3.112s 00:07:34.348 sys 0m0.862s 00:07:34.348 10:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:34.348 10:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:34.348 ************************************ 00:07:34.348 END TEST locking_app_on_unlocked_coremask 00:07:34.348 ************************************ 00:07:34.348 10:23:39 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:34.348 10:23:39 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:34.348 10:23:39 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:34.348 10:23:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.348 10:23:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:34.348 ************************************ 00:07:34.348 START TEST locking_app_on_locked_coremask 00:07:34.348 ************************************ 00:07:34.348 10:23:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:07:34.348 10:23:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1743598 00:07:34.348 10:23:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1743598 /var/tmp/spdk.sock 00:07:34.348 10:23:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:34.348 10:23:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1743598 ']' 00:07:34.348 10:23:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.348 10:23:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:34.348 10:23:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.348 10:23:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:34.348 10:23:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:34.348 [2024-07-22 10:23:39.945012] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
00:07:34.348 [2024-07-22 10:23:39.945062] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1743598 ] 00:07:34.348 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.348 [2024-07-22 10:23:40.011712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.608 [2024-07-22 10:23:40.047559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.180 10:23:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:35.180 10:23:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:35.180 10:23:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:35.180 10:23:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1743791 00:07:35.180 10:23:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1743791 /var/tmp/spdk2.sock 00:07:35.180 10:23:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:35.180 10:23:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1743791 /var/tmp/spdk2.sock 00:07:35.180 10:23:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:35.180 10:23:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:35.180 10:23:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:35.180 10:23:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:35.180 10:23:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1743791 /var/tmp/spdk2.sock 00:07:35.180 10:23:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1743791 ']' 00:07:35.180 10:23:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:35.180 10:23:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:35.180 10:23:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:35.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:35.180 10:23:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:35.180 10:23:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:35.180 [2024-07-22 10:23:40.734918] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
00:07:35.181 [2024-07-22 10:23:40.734969] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1743791 ] 00:07:35.181 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.181 [2024-07-22 10:23:40.833733] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1743598 has claimed it. 00:07:35.181 [2024-07-22 10:23:40.833773] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:35.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1743791) - No such process 00:07:35.754 ERROR: process (pid: 1743791) is no longer running 00:07:35.754 10:23:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:35.754 10:23:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:35.754 10:23:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:35.754 10:23:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:35.754 10:23:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:35.754 10:23:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:35.754 10:23:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1743598 00:07:35.754 10:23:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1743598 00:07:35.754 10:23:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:36.325 lslocks: write error 00:07:36.325 10:23:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1743598 00:07:36.325 10:23:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1743598 ']' 00:07:36.325 10:23:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1743598 00:07:36.325 10:23:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:36.325 10:23:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:36.325 10:23:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1743598 00:07:36.326 10:23:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:36.326 10:23:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:36.326 10:23:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1743598' 00:07:36.326 killing process with pid 1743598 00:07:36.326 10:23:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1743598 00:07:36.326 10:23:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1743598 00:07:36.587 00:07:36.587 real 0m2.168s 00:07:36.587 user 0m2.372s 00:07:36.587 sys 0m0.627s 00:07:36.587 10:23:42 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.587 10:23:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:36.587 ************************************ 00:07:36.587 END TEST locking_app_on_locked_coremask 00:07:36.587 ************************************ 00:07:36.587 10:23:42 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:36.587 10:23:42 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:36.587 10:23:42 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:36.587 10:23:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.587 10:23:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:36.587 ************************************ 00:07:36.587 START TEST locking_overlapped_coremask 00:07:36.587 ************************************ 00:07:36.587 10:23:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:07:36.587 10:23:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1744041 00:07:36.587 10:23:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1744041 /var/tmp/spdk.sock 00:07:36.587 10:23:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:36.587 10:23:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1744041 ']' 00:07:36.587 10:23:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.587 10:23:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:36.587 10:23:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.587 10:23:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:36.587 10:23:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:36.587 [2024-07-22 10:23:42.184823] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
00:07:36.587 [2024-07-22 10:23:42.184877] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1744041 ] 00:07:36.588 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.588 [2024-07-22 10:23:42.253524] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:36.848 [2024-07-22 10:23:42.293441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.848 [2024-07-22 10:23:42.293518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:36.848 [2024-07-22 10:23:42.293521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.418 10:23:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:37.418 10:23:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:37.418 10:23:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1744305 00:07:37.418 10:23:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1744305 /var/tmp/spdk2.sock 00:07:37.418 10:23:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:37.418 10:23:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:37.418 10:23:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1744305 /var/tmp/spdk2.sock 00:07:37.418 10:23:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:37.418 10:23:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:37.418 10:23:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:37.418 10:23:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:37.418 10:23:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1744305 /var/tmp/spdk2.sock 00:07:37.418 10:23:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1744305 ']' 00:07:37.418 10:23:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:37.418 10:23:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:37.418 10:23:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:37.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:37.418 10:23:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:37.418 10:23:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:37.418 [2024-07-22 10:23:43.016121] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
00:07:37.418 [2024-07-22 10:23:43.016174] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1744305 ] 00:07:37.418 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.418 [2024-07-22 10:23:43.097336] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1744041 has claimed it. 00:07:37.418 [2024-07-22 10:23:43.097368] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:37.988 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1744305) - No such process 00:07:37.988 ERROR: process (pid: 1744305) is no longer running 00:07:37.988 10:23:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:37.988 10:23:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:37.988 10:23:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:37.988 10:23:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:37.988 10:23:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:37.988 10:23:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:37.988 10:23:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:37.988 10:23:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:37.988 10:23:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:37.988 10:23:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:37.988 10:23:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1744041 00:07:37.988 10:23:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 1744041 ']' 00:07:37.988 10:23:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 1744041 00:07:37.988 10:23:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:07:37.988 10:23:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:37.988 10:23:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1744041 00:07:37.988 10:23:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:37.988 10:23:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:37.988 10:23:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1744041' 00:07:37.988 killing process with pid 1744041 00:07:37.988 10:23:43 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 1744041 00:07:37.988 10:23:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 1744041 00:07:38.248 00:07:38.248 real 0m1.746s 00:07:38.248 user 0m5.009s 00:07:38.248 sys 0m0.385s 00:07:38.248 10:23:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:38.248 10:23:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:38.248 ************************************ 00:07:38.248 END TEST locking_overlapped_coremask 00:07:38.248 ************************************ 00:07:38.248 10:23:43 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:38.248 10:23:43 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:38.248 10:23:43 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:38.248 10:23:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.248 10:23:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:38.508 ************************************ 00:07:38.508 START TEST locking_overlapped_coremask_via_rpc 00:07:38.508 ************************************ 00:07:38.508 10:23:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:07:38.508 10:23:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1744484 00:07:38.508 10:23:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1744484 /var/tmp/spdk.sock 00:07:38.508 10:23:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:38.508 10:23:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1744484 ']' 00:07:38.508 10:23:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.508 10:23:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:38.508 10:23:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.508 10:23:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:38.508 10:23:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.508 [2024-07-22 10:23:44.006609] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:07:38.508 [2024-07-22 10:23:44.006667] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1744484 ] 00:07:38.508 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.508 [2024-07-22 10:23:44.078842] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
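Note on the lock checks traced above: locks_exist and check_remaining_locks in cpu_locks.sh verify coremask locking through per-core lock files under /var/tmp plus lslocks, exactly as shown in the trace. A minimal manual check along the same lines, assuming a target started with -m 0x7 as in this test (the pid below is a placeholder, not taken from this run), might look like:

    # per-core lock files the target is expected to hold (cores 0-2 for -m 0x7)
    ls /var/tmp/spdk_cpu_lock_*
    # confirm the spdk_tgt process actually owns file locks on them
    lslocks -p <spdk_tgt_pid> | grep spdk_cpu_lock

The stray "lslocks: write error" lines earlier in the log are most likely lslocks hitting a closed pipe once grep -q exits on its first match, rather than a test failure; that reading is an inference from the piped command shown in the trace, not something the log states.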
00:07:38.508 [2024-07-22 10:23:44.078881] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:38.508 [2024-07-22 10:23:44.119318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.508 [2024-07-22 10:23:44.119454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:38.508 [2024-07-22 10:23:44.119720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.446 10:23:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:39.446 10:23:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:39.446 10:23:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1744681 00:07:39.446 10:23:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:39.446 10:23:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1744681 /var/tmp/spdk2.sock 00:07:39.446 10:23:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1744681 ']' 00:07:39.446 10:23:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:39.446 10:23:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:39.446 10:23:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:39.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:39.446 10:23:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:39.447 10:23:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.447 [2024-07-22 10:23:44.831251] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:07:39.447 [2024-07-22 10:23:44.831306] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1744681 ] 00:07:39.447 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.447 [2024-07-22 10:23:44.911259] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:39.447 [2024-07-22 10:23:44.911284] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:39.447 [2024-07-22 10:23:44.969009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:39.447 [2024-07-22 10:23:44.972456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:39.447 [2024-07-22 10:23:44.972458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:40.015 10:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:40.015 10:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:40.015 10:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:40.015 10:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.015 10:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.015 10:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.015 10:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:40.015 10:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:40.015 10:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:40.015 10:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:40.015 10:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:40.015 10:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:40.015 10:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:40.015 10:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:40.015 10:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.015 10:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.015 [2024-07-22 10:23:45.620456] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1744484 has claimed it. 
00:07:40.015 request: 00:07:40.015 { 00:07:40.015 "method": "framework_enable_cpumask_locks", 00:07:40.015 "req_id": 1 00:07:40.015 } 00:07:40.015 Got JSON-RPC error response 00:07:40.015 response: 00:07:40.015 { 00:07:40.015 "code": -32603, 00:07:40.015 "message": "Failed to claim CPU core: 2" 00:07:40.015 } 00:07:40.015 10:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:40.015 10:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:40.015 10:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:40.015 10:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:40.015 10:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:40.015 10:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1744484 /var/tmp/spdk.sock 00:07:40.015 10:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1744484 ']' 00:07:40.015 10:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.015 10:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:40.015 10:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.015 10:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:40.015 10:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.276 10:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:40.276 10:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:40.276 10:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1744681 /var/tmp/spdk2.sock 00:07:40.276 10:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1744681 ']' 00:07:40.276 10:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:40.276 10:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:40.276 10:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:40.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
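For reference, rpc_cmd in these tests effectively wraps SPDK's scripts/rpc.py, so the failing call above could be reproduced by hand roughly as follows (socket path as used by the second target in this run; the exact wrapper behaviour is an assumption here, only the method name and error come from the log):

    # the first target (pid 1744484, -m 0x7) has re-enabled its locks, so core 2 is already claimed;
    # asking the second target (-m 0x1c, overlapping on core 2) to enable locks is expected to fail
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # expected result: JSON-RPC error -32603, "Failed to claim CPU core: 2"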
00:07:40.276 10:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:40.276 10:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.276 10:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:40.276 10:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:40.276 10:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:40.276 10:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:40.276 10:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:40.276 10:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:40.276 00:07:40.276 real 0m2.020s 00:07:40.276 user 0m0.785s 00:07:40.276 sys 0m0.158s 00:07:40.276 10:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:40.276 10:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.276 ************************************ 00:07:40.276 END TEST locking_overlapped_coremask_via_rpc 00:07:40.276 ************************************ 00:07:40.536 10:23:46 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:40.536 10:23:46 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:40.536 10:23:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1744484 ]] 00:07:40.536 10:23:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1744484 00:07:40.536 10:23:46 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1744484 ']' 00:07:40.536 10:23:46 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1744484 00:07:40.536 10:23:46 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:40.536 10:23:46 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:40.536 10:23:46 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1744484 00:07:40.536 10:23:46 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:40.536 10:23:46 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:40.536 10:23:46 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1744484' 00:07:40.536 killing process with pid 1744484 00:07:40.536 10:23:46 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1744484 00:07:40.536 10:23:46 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1744484 00:07:40.796 10:23:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1744681 ]] 00:07:40.796 10:23:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1744681 00:07:40.796 10:23:46 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1744681 ']' 00:07:40.796 10:23:46 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1744681 00:07:40.796 10:23:46 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:07:40.796 10:23:46 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:40.796 10:23:46 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1744681 00:07:40.796 10:23:46 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:07:40.796 10:23:46 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:07:40.796 10:23:46 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1744681' 00:07:40.796 killing process with pid 1744681 00:07:40.796 10:23:46 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1744681 00:07:40.796 10:23:46 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1744681 00:07:41.061 10:23:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:41.061 10:23:46 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:41.061 10:23:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1744484 ]] 00:07:41.061 10:23:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1744484 00:07:41.061 10:23:46 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1744484 ']' 00:07:41.061 10:23:46 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1744484 00:07:41.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1744484) - No such process 00:07:41.061 10:23:46 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1744484 is not found' 00:07:41.061 Process with pid 1744484 is not found 00:07:41.061 10:23:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1744681 ]] 00:07:41.061 10:23:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1744681 00:07:41.061 10:23:46 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1744681 ']' 00:07:41.061 10:23:46 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1744681 00:07:41.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1744681) - No such process 00:07:41.061 10:23:46 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1744681 is not found' 00:07:41.061 Process with pid 1744681 is not found 00:07:41.061 10:23:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:41.061 00:07:41.061 real 0m15.737s 00:07:41.061 user 0m27.218s 00:07:41.061 sys 0m4.755s 00:07:41.061 10:23:46 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.061 10:23:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:41.061 ************************************ 00:07:41.061 END TEST cpu_locks 00:07:41.061 ************************************ 00:07:41.061 10:23:46 event -- common/autotest_common.sh@1142 -- # return 0 00:07:41.061 00:07:41.061 real 0m40.508s 00:07:41.061 user 1m19.797s 00:07:41.061 sys 0m7.837s 00:07:41.062 10:23:46 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.062 10:23:46 event -- common/autotest_common.sh@10 -- # set +x 00:07:41.062 ************************************ 00:07:41.062 END TEST event 00:07:41.062 ************************************ 00:07:41.062 10:23:46 -- common/autotest_common.sh@1142 -- # return 0 00:07:41.062 10:23:46 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:41.062 10:23:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:41.062 10:23:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.062 
10:23:46 -- common/autotest_common.sh@10 -- # set +x 00:07:41.062 ************************************ 00:07:41.062 START TEST thread 00:07:41.062 ************************************ 00:07:41.062 10:23:46 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:41.062 * Looking for test storage... 00:07:41.062 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:41.062 10:23:46 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:41.062 10:23:46 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:41.062 10:23:46 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.062 10:23:46 thread -- common/autotest_common.sh@10 -- # set +x 00:07:41.062 ************************************ 00:07:41.062 START TEST thread_poller_perf 00:07:41.062 ************************************ 00:07:41.062 10:23:46 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:41.424 [2024-07-22 10:23:46.773275] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:07:41.424 [2024-07-22 10:23:46.773375] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1745120 ] 00:07:41.424 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.424 [2024-07-22 10:23:46.847050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.424 [2024-07-22 10:23:46.887499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.424 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:42.368 ====================================== 00:07:42.368 busy:2413416200 (cyc) 00:07:42.368 total_run_count: 287000 00:07:42.368 tsc_hz: 2400000000 (cyc) 00:07:42.368 ====================================== 00:07:42.368 poller_cost: 8409 (cyc), 3503 (nsec) 00:07:42.368 00:07:42.368 real 0m1.185s 00:07:42.368 user 0m1.095s 00:07:42.368 sys 0m0.085s 00:07:42.368 10:23:47 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.368 10:23:47 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:42.368 ************************************ 00:07:42.368 END TEST thread_poller_perf 00:07:42.368 ************************************ 00:07:42.368 10:23:47 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:42.368 10:23:47 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:42.368 10:23:47 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:42.368 10:23:47 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.368 10:23:47 thread -- common/autotest_common.sh@10 -- # set +x 00:07:42.368 ************************************ 00:07:42.368 START TEST thread_poller_perf 00:07:42.368 ************************************ 00:07:42.368 10:23:48 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:42.368 [2024-07-22 10:23:48.032412] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:07:42.368 [2024-07-22 10:23:48.032502] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1745474 ] 00:07:42.628 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.628 [2024-07-22 10:23:48.119475] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.628 [2024-07-22 10:23:48.154601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.628 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:07:43.570 ====================================== 00:07:43.570 busy:2401927494 (cyc) 00:07:43.570 total_run_count: 3806000 00:07:43.570 tsc_hz: 2400000000 (cyc) 00:07:43.570 ====================================== 00:07:43.570 poller_cost: 631 (cyc), 262 (nsec) 00:07:43.570 00:07:43.570 real 0m1.181s 00:07:43.570 user 0m1.091s 00:07:43.570 sys 0m0.085s 00:07:43.570 10:23:49 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.570 10:23:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:43.570 ************************************ 00:07:43.570 END TEST thread_poller_perf 00:07:43.570 ************************************ 00:07:43.570 10:23:49 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:43.570 10:23:49 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:43.570 00:07:43.570 real 0m2.617s 00:07:43.570 user 0m2.280s 00:07:43.570 sys 0m0.343s 00:07:43.570 10:23:49 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.570 10:23:49 thread -- common/autotest_common.sh@10 -- # set +x 00:07:43.570 ************************************ 00:07:43.570 END TEST thread 00:07:43.570 ************************************ 00:07:43.830 10:23:49 -- common/autotest_common.sh@1142 -- # return 0 00:07:43.830 10:23:49 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:43.830 10:23:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:43.830 10:23:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.830 10:23:49 -- common/autotest_common.sh@10 -- # set +x 00:07:43.830 ************************************ 00:07:43.830 START TEST accel 00:07:43.830 ************************************ 00:07:43.830 10:23:49 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:43.830 * Looking for test storage... 00:07:43.830 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:43.830 10:23:49 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:43.830 10:23:49 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:43.830 10:23:49 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:43.831 10:23:49 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1745861 00:07:43.831 10:23:49 accel -- accel/accel.sh@63 -- # waitforlisten 1745861 00:07:43.831 10:23:49 accel -- common/autotest_common.sh@829 -- # '[' -z 1745861 ']' 00:07:43.831 10:23:49 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.831 10:23:49 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:43.831 10:23:49 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:43.831 10:23:49 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
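The poller_cost figures in the two poller_perf runs above are consistent with dividing the reported busy cycles by total_run_count and converting cycles to nanoseconds via the reported 2.4 GHz tsc_hz. A quick shell check, using only the numbers printed by the runs above (this is an annotation, not additional test output):
# poller_cost (cyc)  = busy cycles / total_run_count
# poller_cost (nsec) = poller_cost (cyc) * 1e9 / tsc_hz
echo $(( 2413416200 / 287000 ))               # 1 us period run: ~8409 cyc
echo $(( 8409 * 1000000000 / 2400000000 ))    # ~3503 nsec
echo $(( 2401927494 / 3806000 ))              # 0 us period run: ~631 cyc
echo $(( 631 * 1000000000 / 2400000000 ))     # ~262 nsec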
00:07:43.831 10:23:49 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:43.831 10:23:49 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:43.831 10:23:49 accel -- common/autotest_common.sh@10 -- # set +x 00:07:43.831 10:23:49 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:43.831 10:23:49 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:43.831 10:23:49 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.831 10:23:49 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.831 10:23:49 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:43.831 10:23:49 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:43.831 10:23:49 accel -- accel/accel.sh@41 -- # jq -r . 00:07:43.831 [2024-07-22 10:23:49.468795] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:07:43.831 [2024-07-22 10:23:49.468862] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1745861 ] 00:07:43.831 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.092 [2024-07-22 10:23:49.536402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.092 [2024-07-22 10:23:49.572731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.681 10:23:50 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:44.681 10:23:50 accel -- common/autotest_common.sh@862 -- # return 0 00:07:44.681 10:23:50 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:44.681 10:23:50 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:44.681 10:23:50 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:44.681 10:23:50 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:44.681 10:23:50 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:44.681 10:23:50 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:44.681 10:23:50 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:44.681 10:23:50 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.681 10:23:50 accel -- common/autotest_common.sh@10 -- # set +x 00:07:44.681 10:23:50 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.681 10:23:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:44.681 10:23:50 accel -- accel/accel.sh@72 -- # IFS== 00:07:44.681 10:23:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:44.681 10:23:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:44.681 10:23:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:44.681 10:23:50 accel -- accel/accel.sh@72 -- # IFS== 00:07:44.681 10:23:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:44.681 10:23:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:44.681 10:23:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:44.681 10:23:50 accel -- accel/accel.sh@72 -- # IFS== 00:07:44.681 10:23:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:44.681 10:23:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:44.681 10:23:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:44.681 10:23:50 accel -- accel/accel.sh@72 -- # IFS== 00:07:44.681 10:23:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:44.681 10:23:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:44.681 10:23:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:44.681 10:23:50 accel -- accel/accel.sh@72 -- # IFS== 00:07:44.681 10:23:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:44.681 10:23:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:44.681 10:23:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:44.681 10:23:50 accel -- accel/accel.sh@72 -- # IFS== 00:07:44.681 10:23:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:44.681 10:23:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:44.681 10:23:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:44.681 10:23:50 accel -- accel/accel.sh@72 -- # IFS== 00:07:44.681 10:23:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:44.681 10:23:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:44.681 10:23:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:44.681 10:23:50 accel -- accel/accel.sh@72 -- # IFS== 00:07:44.681 10:23:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:44.681 10:23:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:44.681 10:23:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:44.681 10:23:50 accel -- accel/accel.sh@72 -- # IFS== 00:07:44.681 10:23:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:44.681 10:23:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:44.681 10:23:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:44.681 10:23:50 accel -- accel/accel.sh@72 -- # IFS== 00:07:44.681 10:23:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:44.681 10:23:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:44.681 10:23:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:44.681 10:23:50 accel -- accel/accel.sh@72 -- # IFS== 00:07:44.681 10:23:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:44.681 
10:23:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:44.681 10:23:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:44.681 10:23:50 accel -- accel/accel.sh@72 -- # IFS== 00:07:44.681 10:23:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:44.681 10:23:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:44.681 10:23:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:44.681 10:23:50 accel -- accel/accel.sh@72 -- # IFS== 00:07:44.681 10:23:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:44.681 10:23:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:44.681 10:23:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:44.681 10:23:50 accel -- accel/accel.sh@72 -- # IFS== 00:07:44.681 10:23:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:44.681 10:23:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:44.681 10:23:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:44.681 10:23:50 accel -- accel/accel.sh@72 -- # IFS== 00:07:44.681 10:23:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:44.681 10:23:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:44.681 10:23:50 accel -- accel/accel.sh@75 -- # killprocess 1745861 00:07:44.681 10:23:50 accel -- common/autotest_common.sh@948 -- # '[' -z 1745861 ']' 00:07:44.681 10:23:50 accel -- common/autotest_common.sh@952 -- # kill -0 1745861 00:07:44.681 10:23:50 accel -- common/autotest_common.sh@953 -- # uname 00:07:44.681 10:23:50 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:44.681 10:23:50 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1745861 00:07:44.681 10:23:50 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:44.681 10:23:50 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:44.681 10:23:50 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1745861' 00:07:44.681 killing process with pid 1745861 00:07:44.681 10:23:50 accel -- common/autotest_common.sh@967 -- # kill 1745861 00:07:44.681 10:23:50 accel -- common/autotest_common.sh@972 -- # wait 1745861 00:07:44.940 10:23:50 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:44.940 10:23:50 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:44.941 10:23:50 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:44.941 10:23:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.941 10:23:50 accel -- common/autotest_common.sh@10 -- # set +x 00:07:44.941 10:23:50 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:07:44.941 10:23:50 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:44.941 10:23:50 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:44.941 10:23:50 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:44.941 10:23:50 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:44.941 10:23:50 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.941 10:23:50 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.941 10:23:50 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:44.941 10:23:50 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:44.941 10:23:50 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:07:44.941 10:23:50 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.941 10:23:50 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:44.941 10:23:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:44.941 10:23:50 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:44.941 10:23:50 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:44.941 10:23:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.941 10:23:50 accel -- common/autotest_common.sh@10 -- # set +x 00:07:45.200 ************************************ 00:07:45.200 START TEST accel_missing_filename 00:07:45.200 ************************************ 00:07:45.200 10:23:50 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:07:45.200 10:23:50 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:45.200 10:23:50 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:45.200 10:23:50 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:45.200 10:23:50 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.200 10:23:50 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:45.200 10:23:50 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.200 10:23:50 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:45.200 10:23:50 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:45.200 10:23:50 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:45.200 10:23:50 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:45.200 10:23:50 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:45.200 10:23:50 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.200 10:23:50 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.200 10:23:50 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:45.200 10:23:50 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:45.200 10:23:50 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:45.200 [2024-07-22 10:23:50.696277] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:07:45.200 [2024-07-22 10:23:50.696351] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1746050 ] 00:07:45.200 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.200 [2024-07-22 10:23:50.763760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.200 [2024-07-22 10:23:50.795319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.200 [2024-07-22 10:23:50.827192] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:45.200 [2024-07-22 10:23:50.864336] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:07:45.459 A filename is required. 
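The failure is the expected outcome of this negative test: per the accel_perf help text, the compress workload needs an uncompressed input file supplied with -l. A minimal sketch of the two invocation shapes exercised here and in the accel_compress_verify test that follows (binary path shortened from the full path shown in the trace above):
# compress with no input file: accel_perf refuses to start ("A filename is required.")
./build/examples/accel_perf -t 1 -w compress
# compress with an input file via -l plus -y (verify): also rejected, since
# compression does not support the verify option, as the next test shows.
./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib -y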
00:07:45.459 10:23:50 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:45.459 10:23:50 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:45.459 10:23:50 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:45.459 10:23:50 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:45.459 10:23:50 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:45.459 10:23:50 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:45.459 00:07:45.459 real 0m0.239s 00:07:45.459 user 0m0.166s 00:07:45.459 sys 0m0.113s 00:07:45.459 10:23:50 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.459 10:23:50 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:45.459 ************************************ 00:07:45.459 END TEST accel_missing_filename 00:07:45.459 ************************************ 00:07:45.459 10:23:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:45.459 10:23:50 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:45.459 10:23:50 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:45.459 10:23:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.459 10:23:50 accel -- common/autotest_common.sh@10 -- # set +x 00:07:45.459 ************************************ 00:07:45.459 START TEST accel_compress_verify 00:07:45.459 ************************************ 00:07:45.459 10:23:50 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:45.459 10:23:50 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:45.459 10:23:50 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:45.459 10:23:50 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:45.459 10:23:50 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.459 10:23:50 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:45.459 10:23:50 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.459 10:23:50 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:45.459 10:23:50 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:45.459 10:23:50 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:45.459 10:23:50 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:45.459 10:23:50 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:45.459 10:23:50 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.459 10:23:50 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.459 10:23:50 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:45.459 10:23:50 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:45.459 10:23:50 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:45.459 [2024-07-22 10:23:51.009878] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:07:45.459 [2024-07-22 10:23:51.009959] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1746227 ] 00:07:45.459 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.459 [2024-07-22 10:23:51.079416] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.459 [2024-07-22 10:23:51.114288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.459 [2024-07-22 10:23:51.146877] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:45.718 [2024-07-22 10:23:51.184350] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:07:45.718 00:07:45.718 Compression does not support the verify option, aborting. 00:07:45.718 10:23:51 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:45.718 10:23:51 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:45.718 10:23:51 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:45.718 10:23:51 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:45.718 10:23:51 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:45.718 10:23:51 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:45.718 00:07:45.718 real 0m0.245s 00:07:45.718 user 0m0.174s 00:07:45.718 sys 0m0.113s 00:07:45.718 10:23:51 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.718 10:23:51 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:45.718 ************************************ 00:07:45.718 END TEST accel_compress_verify 00:07:45.718 ************************************ 00:07:45.718 10:23:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:45.718 10:23:51 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:45.718 10:23:51 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:45.718 10:23:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.718 10:23:51 accel -- common/autotest_common.sh@10 -- # set +x 00:07:45.718 ************************************ 00:07:45.718 START TEST accel_wrong_workload 00:07:45.718 ************************************ 00:07:45.718 10:23:51 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:07:45.718 10:23:51 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:45.718 10:23:51 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:45.718 10:23:51 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:45.718 10:23:51 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.718 10:23:51 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:45.719 10:23:51 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.719 10:23:51 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:07:45.719 10:23:51 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:45.719 10:23:51 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:45.719 10:23:51 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:45.719 10:23:51 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:45.719 10:23:51 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.719 10:23:51 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.719 10:23:51 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:45.719 10:23:51 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:45.719 10:23:51 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:45.719 Unsupported workload type: foobar 00:07:45.719 [2024-07-22 10:23:51.328895] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:45.719 accel_perf options: 00:07:45.719 [-h help message] 00:07:45.719 [-q queue depth per core] 00:07:45.719 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:45.719 [-T number of threads per core 00:07:45.719 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:45.719 [-t time in seconds] 00:07:45.719 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:45.719 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:45.719 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:45.719 [-l for compress/decompress workloads, name of uncompressed input file 00:07:45.719 [-S for crc32c workload, use this seed value (default 0) 00:07:45.719 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:45.719 [-f for fill workload, use this BYTE value (default 255) 00:07:45.719 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:45.719 [-y verify result if this switch is on] 00:07:45.719 [-a tasks to allocate per core (default: same value as -q)] 00:07:45.719 Can be used to spread operations across a wider range of memory. 
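For reference, the options listed above combine into invocations of the following shape; the flag set is taken from the help text printed above, but the specific durations, queue depths, sizes and seeds here are made-up illustrative values, not taken from this run:
# Illustrative accel_perf invocations assembled from the listed options
./build/examples/accel_perf -t 5 -w crc32c -S 32 -q 64 -o 4096 -y   # crc32c with seed 32, result verified
./build/examples/accel_perf -t 5 -w xor -x 3 -q 32                  # xor over 3 source buffers
./build/examples/accel_perf -t 5 -w fill -f 255 -o 131072           # fill with byte value 255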
00:07:45.719 10:23:51 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:45.719 10:23:51 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:45.719 10:23:51 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:45.719 10:23:51 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:45.719 00:07:45.719 real 0m0.038s 00:07:45.719 user 0m0.024s 00:07:45.719 sys 0m0.014s 00:07:45.719 10:23:51 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.719 10:23:51 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:45.719 ************************************ 00:07:45.719 END TEST accel_wrong_workload 00:07:45.719 ************************************ 00:07:45.719 Error: writing output failed: Broken pipe 00:07:45.719 10:23:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:45.719 10:23:51 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:45.719 10:23:51 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:45.719 10:23:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.719 10:23:51 accel -- common/autotest_common.sh@10 -- # set +x 00:07:45.719 ************************************ 00:07:45.719 START TEST accel_negative_buffers 00:07:45.719 ************************************ 00:07:45.719 10:23:51 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:45.719 10:23:51 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:45.719 10:23:51 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:45.719 10:23:51 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:45.719 10:23:51 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.719 10:23:51 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:45.719 10:23:51 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.719 10:23:51 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:45.719 10:23:51 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:45.719 10:23:51 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:45.719 10:23:51 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:45.719 10:23:51 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:45.978 10:23:51 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.978 10:23:51 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.978 10:23:51 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:45.978 10:23:51 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:45.978 10:23:51 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:45.978 -x option must be non-negative. 
00:07:45.978 [2024-07-22 10:23:51.440789] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:45.978 accel_perf options: 00:07:45.978 [-h help message] 00:07:45.978 [-q queue depth per core] 00:07:45.978 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:45.978 [-T number of threads per core 00:07:45.978 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:45.978 [-t time in seconds] 00:07:45.978 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:45.978 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:45.978 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:45.978 [-l for compress/decompress workloads, name of uncompressed input file 00:07:45.978 [-S for crc32c workload, use this seed value (default 0) 00:07:45.978 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:45.978 [-f for fill workload, use this BYTE value (default 255) 00:07:45.978 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:45.978 [-y verify result if this switch is on] 00:07:45.978 [-a tasks to allocate per core (default: same value as -q)] 00:07:45.978 Can be used to spread operations across a wider range of memory. 00:07:45.978 10:23:51 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:45.978 10:23:51 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:45.978 10:23:51 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:45.978 10:23:51 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:45.978 00:07:45.978 real 0m0.039s 00:07:45.978 user 0m0.025s 00:07:45.978 sys 0m0.014s 00:07:45.978 10:23:51 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.978 10:23:51 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:45.978 ************************************ 00:07:45.978 END TEST accel_negative_buffers 00:07:45.978 ************************************ 00:07:45.978 Error: writing output failed: Broken pipe 00:07:45.978 10:23:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:45.978 10:23:51 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:45.978 10:23:51 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:45.978 10:23:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.978 10:23:51 accel -- common/autotest_common.sh@10 -- # set +x 00:07:45.978 ************************************ 00:07:45.978 START TEST accel_crc32c 00:07:45.978 ************************************ 00:07:45.978 10:23:51 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:45.978 10:23:51 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:45.978 10:23:51 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:45.978 10:23:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.978 10:23:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.978 10:23:51 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:45.978 10:23:51 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:45.978 10:23:51 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:45.978 10:23:51 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:45.978 10:23:51 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:45.978 10:23:51 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.978 10:23:51 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.978 10:23:51 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:45.978 10:23:51 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:45.978 10:23:51 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:45.978 [2024-07-22 10:23:51.548609] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:07:45.978 [2024-07-22 10:23:51.548682] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1746326 ] 00:07:45.978 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.978 [2024-07-22 10:23:51.619101] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.978 [2024-07-22 10:23:51.657196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:46.237 10:23:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.177 10:23:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:47.177 10:23:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:07:47.177 10:23:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.177 10:23:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.177 10:23:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:47.177 10:23:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.177 10:23:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.177 10:23:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.177 10:23:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:47.177 10:23:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.177 10:23:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.177 10:23:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.177 10:23:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:47.177 10:23:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.177 10:23:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.177 10:23:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.177 10:23:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:47.177 10:23:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.177 10:23:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.177 10:23:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.177 10:23:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:47.177 10:23:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.177 10:23:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.177 10:23:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.177 10:23:52 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:47.177 10:23:52 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:47.177 10:23:52 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:47.177 00:07:47.177 real 0m1.252s 00:07:47.177 user 0m1.145s 00:07:47.177 sys 0m0.118s 00:07:47.177 10:23:52 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.177 10:23:52 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:47.177 ************************************ 00:07:47.177 END TEST accel_crc32c 00:07:47.177 ************************************ 00:07:47.177 10:23:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:47.177 10:23:52 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:47.177 10:23:52 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:47.177 10:23:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.177 10:23:52 accel -- common/autotest_common.sh@10 -- # set +x 00:07:47.177 ************************************ 00:07:47.177 START TEST accel_crc32c_C2 00:07:47.177 ************************************ 00:07:47.177 10:23:52 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:47.177 10:23:52 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:47.177 10:23:52 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:47.177 10:23:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:47.177 10:23:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.177 10:23:52 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:47.177 10:23:52 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:47.177 10:23:52 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:47.177 10:23:52 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:47.177 10:23:52 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:47.177 10:23:52 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.177 10:23:52 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.177 10:23:52 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:47.177 10:23:52 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:47.177 10:23:52 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:47.437 [2024-07-22 10:23:52.876817] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:07:47.437 [2024-07-22 10:23:52.876909] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1746675 ] 00:07:47.437 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.437 [2024-07-22 10:23:52.942646] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.437 [2024-07-22 10:23:52.972712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.437 10:23:53 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:07:47.437 10:23:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:48.822 10:23:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:48.822 10:23:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.822 10:23:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:48.822 10:23:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:48.822 10:23:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:48.822 10:23:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.822 10:23:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:48.822 10:23:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:48.822 10:23:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:48.822 10:23:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.822 10:23:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:48.822 10:23:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:48.822 10:23:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:48.822 10:23:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.822 10:23:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:48.822 10:23:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:48.822 10:23:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:48.822 10:23:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.822 10:23:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:48.822 10:23:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:48.822 10:23:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:48.822 10:23:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.822 10:23:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:48.822 10:23:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:48.822 10:23:54 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:48.822 10:23:54 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:48.822 10:23:54 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:48.822 00:07:48.822 real 0m1.240s 00:07:48.822 user 0m1.143s 00:07:48.822 sys 0m0.108s 00:07:48.822 10:23:54 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:48.822 10:23:54 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:48.822 ************************************ 00:07:48.822 END TEST accel_crc32c_C2 00:07:48.822 ************************************ 00:07:48.822 10:23:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:48.822 10:23:54 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:48.822 10:23:54 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:48.822 10:23:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.822 10:23:54 accel -- common/autotest_common.sh@10 -- # set +x 00:07:48.822 ************************************ 00:07:48.822 START TEST accel_copy 00:07:48.822 ************************************ 00:07:48.822 10:23:54 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:48.822 [2024-07-22 10:23:54.190940] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:07:48.822 [2024-07-22 10:23:54.191005] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1746886 ] 00:07:48.822 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.822 [2024-07-22 10:23:54.259715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.822 [2024-07-22 10:23:54.294779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:48.822 10:23:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:49.765 10:23:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:49.765 10:23:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:49.765 10:23:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:49.765 10:23:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:49.765 
10:23:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:49.765 10:23:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:49.765 10:23:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:49.765 10:23:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:49.765 10:23:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:49.765 10:23:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:49.765 10:23:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:49.765 10:23:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:49.765 10:23:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:49.765 10:23:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:49.765 10:23:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:49.765 10:23:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:49.765 10:23:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:49.765 10:23:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:49.765 10:23:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:49.765 10:23:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:49.765 10:23:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:49.765 10:23:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:49.765 10:23:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:49.765 10:23:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:49.765 10:23:55 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:49.765 10:23:55 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:49.765 10:23:55 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:49.765 00:07:49.765 real 0m1.247s 00:07:49.765 user 0m1.146s 00:07:49.765 sys 0m0.112s 00:07:49.765 10:23:55 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:49.765 10:23:55 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:49.765 ************************************ 00:07:49.765 END TEST accel_copy 00:07:49.765 ************************************ 00:07:49.765 10:23:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:49.765 10:23:55 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:49.765 10:23:55 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:49.765 10:23:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.765 10:23:55 accel -- common/autotest_common.sh@10 -- # set +x 00:07:50.025 ************************************ 00:07:50.025 START TEST accel_fill 00:07:50.025 ************************************ 00:07:50.025 10:23:55 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:50.025 10:23:55 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:50.025 10:23:55 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:50.025 10:23:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:50.025 10:23:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:50.025 10:23:55 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:50.025 10:23:55 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:50.025 10:23:55 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:07:50.025 10:23:55 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:50.025 10:23:55 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:50.025 10:23:55 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.025 10:23:55 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.025 10:23:55 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:50.025 10:23:55 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:50.025 10:23:55 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:50.025 [2024-07-22 10:23:55.513392] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:07:50.025 [2024-07-22 10:23:55.513465] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1747070 ] 00:07:50.025 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.025 [2024-07-22 10:23:55.581018] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.025 [2024-07-22 10:23:55.612101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.025 10:23:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:50.025 10:23:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:50.025 10:23:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:50.025 10:23:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:50.025 10:23:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:50.025 10:23:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:50.025 10:23:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:50.025 10:23:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:50.025 10:23:55 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:50.025 10:23:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:50.025 10:23:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:50.025 10:23:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:50.025 10:23:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:50.025 10:23:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:50.025 10:23:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:50.025 10:23:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:50.026 10:23:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:50.026 10:23:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:50.026 10:23:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:50.026 10:23:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:50.026 10:23:55 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:50.026 10:23:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:50.026 10:23:55 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:50.026 10:23:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:50.026 10:23:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:50.026 10:23:55 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:50.026 10:23:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:50.026 10:23:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:50.026 10:23:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:50.026 10:23:55 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
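The fill job being configured in this stretch of the trace was launched as accel_test -t 1 -w fill -f 128 -q 64 -a 64, and the values echoed around here (val=fill, val=0x80, '4096 bytes') point to a 4096-byte destination filled with the byte 0x80. The C sketch below is only a plain-memory analogue of that operation, a fill followed by a verify; the buffer size and fill byte are taken from the trace, while everything else is illustrative rather than the accel framework's actual code path.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    enum { BUF_SIZE = 4096 };           /* block size echoed in the trace */
    const uint8_t fill_byte = 0x80;     /* matches the 'val=0x80' above */
    uint8_t *dst = malloc(BUF_SIZE);

    if (dst == NULL)
        return 1;

    /* A fill op expands a single byte pattern across the destination. */
    memset(dst, fill_byte, BUF_SIZE);

    /* Verify the result the way a self-checking test would. */
    for (size_t i = 0; i < BUF_SIZE; i++) {
        if (dst[i] != fill_byte) {
            fprintf(stderr, "mismatch at offset %zu\n", i);
            free(dst);
            return 1;
        }
    }
    printf("fill of %d bytes with 0x%02X verified\n", BUF_SIZE, fill_byte);
    free(dst);
    return 0;
}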
00:07:50.026 10:23:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:50.026 10:23:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:50.026 10:23:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:50.026 10:23:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:50.026 10:23:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:50.026 10:23:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:50.026 10:23:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:50.026 10:23:55 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:50.026 10:23:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:50.026 10:23:55 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:50.026 10:23:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:50.026 10:23:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:50.026 10:23:55 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:50.026 10:23:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:50.026 10:23:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:50.026 10:23:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:50.026 10:23:55 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:50.026 10:23:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:50.026 10:23:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:50.026 10:23:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:50.026 10:23:55 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:50.026 10:23:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:50.026 10:23:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:50.026 10:23:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:50.026 10:23:55 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:50.026 10:23:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:50.026 10:23:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:50.026 10:23:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:50.026 10:23:55 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:50.026 10:23:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:50.026 10:23:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:50.026 10:23:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:50.026 10:23:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:50.026 10:23:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:50.026 10:23:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:50.026 10:23:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:50.026 10:23:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:50.026 10:23:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:50.026 10:23:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:50.026 10:23:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:51.412 10:23:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:51.412 10:23:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:51.412 10:23:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:51.412 10:23:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:51.412 10:23:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:51.412 10:23:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:51.412 10:23:56 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:07:51.412 10:23:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:51.412 10:23:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:51.412 10:23:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:51.412 10:23:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:51.412 10:23:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:51.412 10:23:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:51.412 10:23:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:51.412 10:23:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:51.412 10:23:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:51.412 10:23:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:51.412 10:23:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:51.412 10:23:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:51.412 10:23:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:51.412 10:23:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:51.412 10:23:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:51.412 10:23:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:51.412 10:23:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:51.412 10:23:56 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:51.412 10:23:56 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:51.412 10:23:56 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:51.412 00:07:51.412 real 0m1.241s 00:07:51.412 user 0m1.141s 00:07:51.412 sys 0m0.111s 00:07:51.412 10:23:56 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:51.412 10:23:56 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:51.412 ************************************ 00:07:51.412 END TEST accel_fill 00:07:51.412 ************************************ 00:07:51.412 10:23:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:51.412 10:23:56 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:51.412 10:23:56 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:51.412 10:23:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.412 10:23:56 accel -- common/autotest_common.sh@10 -- # set +x 00:07:51.412 ************************************ 00:07:51.412 START TEST accel_copy_crc32c 00:07:51.412 ************************************ 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:51.412 [2024-07-22 10:23:56.830792] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:07:51.412 [2024-07-22 10:23:56.830887] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1747414 ] 00:07:51.412 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.412 [2024-07-22 10:23:56.897376] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.412 [2024-07-22 10:23:56.928896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:51.412 
10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:51.412 10:23:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.351 10:23:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:52.351 10:23:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.351 10:23:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.351 10:23:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.351 10:23:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:52.351 10:23:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.351 10:23:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.351 10:23:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.351 10:23:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:52.351 10:23:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.351 10:23:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.351 10:23:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.351 10:23:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:52.351 10:23:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.351 10:23:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.351 10:23:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.351 10:23:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:52.351 10:23:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.351 10:23:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.351 10:23:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.351 10:23:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:52.351 10:23:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.351 10:23:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.351 10:23:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.351 10:23:58 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:52.351 10:23:58 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:52.351 10:23:58 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:52.351 00:07:52.351 real 0m1.242s 00:07:52.351 user 0m1.150s 00:07:52.351 sys 0m0.105s 00:07:52.351 10:23:58 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.351 10:23:58 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:52.351 ************************************ 00:07:52.351 END TEST accel_copy_crc32c 00:07:52.351 ************************************ 00:07:52.612 10:23:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:52.612 10:23:58 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:52.612 10:23:58 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:52.612 10:23:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.612 10:23:58 accel -- common/autotest_common.sh@10 -- # set +x 00:07:52.612 ************************************ 00:07:52.612 START TEST accel_copy_crc32c_C2 00:07:52.612 ************************************ 00:07:52.612 10:23:58 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:52.612 [2024-07-22 10:23:58.146989] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:07:52.612 [2024-07-22 10:23:58.147083] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1747761 ] 00:07:52.612 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.612 [2024-07-22 10:23:58.213209] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.612 [2024-07-22 10:23:58.244893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
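Both the accel_copy_crc32c run that just passed above and the -C 2 variant whose setup is traced here pair a buffer copy with a CRC-32C computed over the copied data; the plain run echoes two '4096 bytes' buffers while the -C 2 run echoes '4096 bytes' and '8192 bytes'. The sketch below shows the underlying idea in plain C: copy a source in segments and chain the CRC-32C across them by reusing the previous result as the seed. It mirrors the concept only; it is not the SPDK software module, and reading -C 2 as "two chained source segments" is an assumption rather than something the log states.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Same minimal CRC-32C as in the earlier sketch; seeding with the previous
 * result lets the checksum accumulate segment by segment. */
static uint32_t crc32c_sw(uint32_t crc, const void *buf, size_t len)
{
    const uint8_t *p = buf;

    crc = ~crc;
    while (len--) {
        crc ^= *p++;
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78U : crc >> 1;
    }
    return ~crc;
}

int main(void)
{
    enum { SEG = 4096, NSEG = 2 };      /* segment size from the trace; two segments assumed */
    uint8_t *src = malloc(NSEG * SEG);
    uint8_t *dst = malloc(NSEG * SEG);
    uint32_t crc = 0;

    if (src == NULL || dst == NULL)
        return 1;
    for (size_t i = 0; i < NSEG * SEG; i++)
        src[i] = (uint8_t)i;

    /* copy_crc32c: copy each segment, then fold it into the running CRC. */
    for (int seg = 0; seg < NSEG; seg++) {
        memcpy(dst + seg * SEG, src + seg * SEG, SEG);
        crc = crc32c_sw(crc, dst + seg * SEG, SEG);
    }

    printf("copied %d bytes, crc32c = 0x%08" PRIX32 "\n", NSEG * SEG, crc);
    free(src);
    free(dst);
    return 0;
}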
00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.612 10:23:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:53.996 10:23:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:53.996 10:23:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.996 10:23:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:53.996 10:23:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:53.996 10:23:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:53.996 10:23:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.996 10:23:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:53.996 10:23:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:53.996 10:23:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:53.996 10:23:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.996 10:23:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:53.996 10:23:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:53.996 10:23:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:53.996 10:23:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.996 10:23:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:53.996 10:23:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:53.996 10:23:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:53.996 10:23:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.996 10:23:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:53.996 10:23:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:53.996 10:23:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:53.996 10:23:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.996 10:23:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:53.996 10:23:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
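Every accel_perf launch in this log prints "EAL: No free 2048 kB hugepages reported on node 1" before its reactor starts. When that notice is unexpected on a runner, the per-node sysfs counters are the quickest thing to check, and the short C probe below simply reads one of them. The sysfs path is the standard location on NUMA Linux hosts with 2 MB hugepages configured, but whether node 1 (or that hugepage size) exists on a given machine is host-specific, so treat this as a diagnostic sketch rather than part of the test flow.

#include <stdio.h>

int main(void)
{
    /* Per-node count of free 2 MB hugepages on NUMA Linux hosts. */
    const char *path =
        "/sys/devices/system/node/node1/hugepages/hugepages-2048kB/free_hugepages";
    FILE *f = fopen(path, "r");
    long free_pages;

    if (f == NULL) {
        perror(path);                   /* node or hugepage size not present */
        return 1;
    }
    if (fscanf(f, "%ld", &free_pages) != 1) {
        fclose(f);
        return 1;
    }
    fclose(f);

    printf("node1 free 2048 kB hugepages: %ld\n", free_pages);
    return 0;
}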
00:07:53.996 10:23:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:53.996 10:23:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:53.996 10:23:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:53.996 00:07:53.996 real 0m1.242s 00:07:53.996 user 0m1.146s 00:07:53.996 sys 0m0.109s 00:07:53.996 10:23:59 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:53.996 10:23:59 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:53.996 ************************************ 00:07:53.996 END TEST accel_copy_crc32c_C2 00:07:53.996 ************************************ 00:07:53.996 10:23:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:53.996 10:23:59 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:53.996 10:23:59 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:53.996 10:23:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.996 10:23:59 accel -- common/autotest_common.sh@10 -- # set +x 00:07:53.996 ************************************ 00:07:53.996 START TEST accel_dualcast 00:07:53.996 ************************************ 00:07:53.996 10:23:59 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:07:53.996 10:23:59 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:53.996 10:23:59 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:53.996 10:23:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:53.996 10:23:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:53.996 10:23:59 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:53.996 10:23:59 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:53.996 10:23:59 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:53.996 10:23:59 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:53.996 10:23:59 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:53.996 10:23:59 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:53.996 10:23:59 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:53.996 10:23:59 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:53.996 10:23:59 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:53.996 10:23:59 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:53.996 [2024-07-22 10:23:59.462145] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
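The accel_dualcast test that starts above is driven with -w dualcast -y, an operation that lands one source buffer in two destination buffers at once. The snippet below is a plain-C illustration of that behaviour (two copies plus a verify) using the 4096-byte block size echoed throughout these traces; it is not how the accel framework actually dispatches the op.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    enum { BLK = 4096 };                /* block size echoed in the trace */
    uint8_t *src = malloc(BLK), *dst1 = malloc(BLK), *dst2 = malloc(BLK);

    if (src == NULL || dst1 == NULL || dst2 == NULL)
        return 1;
    for (size_t i = 0; i < BLK; i++)
        src[i] = (uint8_t)(i * 7);

    /* Dualcast: the same source ends up in both destinations. */
    memcpy(dst1, src, BLK);
    memcpy(dst2, src, BLK);

    if (memcmp(dst1, src, BLK) == 0 && memcmp(dst2, src, BLK) == 0)
        puts("dualcast verified: both destinations match the source");
    free(src);
    free(dst1);
    free(dst2);
    return 0;
}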
00:07:53.996 [2024-07-22 10:23:59.462226] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1748051 ] 00:07:53.996 EAL: No free 2048 kB hugepages reported on node 1 00:07:53.996 [2024-07-22 10:23:59.530232] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.996 [2024-07-22 10:23:59.563499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.996 10:23:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:53.996 10:23:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:53.996 10:23:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:53.996 10:23:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:53.996 10:23:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:53.996 10:23:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:53.996 10:23:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:53.996 10:23:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:53.996 10:23:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:53.996 10:23:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:53.996 10:23:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:53.996 10:23:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:53.996 10:23:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:53.996 10:23:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:53.996 10:23:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:53.996 10:23:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:53.996 10:23:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:53.996 10:23:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:53.996 10:23:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:53.996 10:23:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:53.996 10:23:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:53.996 10:23:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:53.996 10:23:59 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:53.996 10:23:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:53.996 10:23:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:53.996 10:23:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:53.996 10:23:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:53.996 10:23:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:53.996 10:23:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:53.996 10:23:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:53.996 10:23:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:53.996 10:23:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:53.996 10:23:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:53.996 10:23:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:53.996 10:23:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:53.996 10:23:59 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:53.996 10:23:59 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:07:53.996 10:23:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:53.996 10:23:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:53.996 10:23:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:53.997 10:23:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:53.997 10:23:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:53.997 10:23:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:53.997 10:23:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:53.997 10:23:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:53.997 10:23:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:53.997 10:23:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:53.997 10:23:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:53.997 10:23:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:53.997 10:23:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:53.997 10:23:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:53.997 10:23:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:53.997 10:23:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:53.997 10:23:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:53.997 10:23:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:53.997 10:23:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:53.997 10:23:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:53.997 10:23:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:53.997 10:23:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:53.997 10:23:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:53.997 10:23:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:53.997 10:23:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:53.997 10:23:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:53.997 10:23:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:53.997 10:23:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:53.997 10:23:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:55.380 10:24:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:55.380 10:24:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:55.380 10:24:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:55.380 10:24:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:55.380 10:24:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:55.380 10:24:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:55.380 10:24:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:55.380 10:24:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:55.380 10:24:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:55.380 10:24:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:55.380 10:24:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:55.380 10:24:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:55.380 10:24:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:55.380 10:24:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:55.380 10:24:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:55.380 10:24:00 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:55.380 10:24:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:55.380 10:24:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:55.380 10:24:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:55.380 10:24:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:55.380 10:24:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:55.380 10:24:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:55.380 10:24:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:55.380 10:24:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:55.380 10:24:00 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:55.380 10:24:00 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:55.380 10:24:00 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:55.380 00:07:55.380 real 0m1.247s 00:07:55.380 user 0m1.148s 00:07:55.380 sys 0m0.110s 00:07:55.380 10:24:00 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:55.380 10:24:00 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:55.380 ************************************ 00:07:55.380 END TEST accel_dualcast 00:07:55.380 ************************************ 00:07:55.380 10:24:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:55.380 10:24:00 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:55.380 10:24:00 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:55.380 10:24:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.380 10:24:00 accel -- common/autotest_common.sh@10 -- # set +x 00:07:55.380 ************************************ 00:07:55.380 START TEST accel_compare 00:07:55.380 ************************************ 00:07:55.380 10:24:00 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:55.380 [2024-07-22 10:24:00.784519] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
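The accel_compare job launched above (-w compare -y) checks two buffers for byte-for-byte equality, which in plain C reduces to a memcmp over the 4096-byte block. The sketch below does exactly that; it is illustrative only and says nothing about how the framework reports miscompares.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    enum { BLK = 4096 };                /* block size echoed in the trace */
    uint8_t *a = malloc(BLK), *b = malloc(BLK);

    if (a == NULL || b == NULL)
        return 1;
    memset(a, 0x5A, BLK);
    memcpy(b, a, BLK);

    /* Compare: report whether the two buffers are identical. */
    printf("buffers %s\n", memcmp(a, b, BLK) == 0 ? "match" : "differ");
    free(a);
    free(b);
    return 0;
}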
00:07:55.380 [2024-07-22 10:24:00.784585] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1748250 ] 00:07:55.380 EAL: No free 2048 kB hugepages reported on node 1 00:07:55.380 [2024-07-22 10:24:00.853707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.380 [2024-07-22 10:24:00.889316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:55.380 10:24:00 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:55.380 10:24:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:56.323 10:24:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:56.323 10:24:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:56.323 10:24:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:56.323 10:24:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:56.323 10:24:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:56.323 10:24:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:56.323 10:24:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:56.323 10:24:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:56.323 10:24:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:56.323 10:24:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:56.323 10:24:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:56.323 10:24:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:56.323 10:24:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:56.323 10:24:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:56.323 10:24:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:56.323 10:24:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:56.323 
10:24:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:56.323 10:24:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:56.323 10:24:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:56.323 10:24:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:56.323 10:24:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:56.323 10:24:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:56.323 10:24:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:56.323 10:24:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:56.323 10:24:02 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:56.323 10:24:02 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:56.323 10:24:02 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:56.323 00:07:56.323 real 0m1.249s 00:07:56.323 user 0m1.149s 00:07:56.323 sys 0m0.110s 00:07:56.323 10:24:02 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:56.323 10:24:02 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:56.323 ************************************ 00:07:56.323 END TEST accel_compare 00:07:56.323 ************************************ 00:07:56.584 10:24:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:56.584 10:24:02 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:56.584 10:24:02 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:56.584 10:24:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.584 10:24:02 accel -- common/autotest_common.sh@10 -- # set +x 00:07:56.584 ************************************ 00:07:56.584 START TEST accel_xor 00:07:56.584 ************************************ 00:07:56.584 10:24:02 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:07:56.584 10:24:02 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:56.584 10:24:02 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:56.584 10:24:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.584 10:24:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.584 10:24:02 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:56.584 10:24:02 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:56.584 10:24:02 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:56.584 10:24:02 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:56.584 10:24:02 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:56.584 10:24:02 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:56.584 10:24:02 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:56.584 10:24:02 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:56.584 10:24:02 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:56.584 10:24:02 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:56.584 [2024-07-22 10:24:02.110218] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
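The block above records how the harness launches each accel test: run_test wraps accel_test, which executes the prebuilt accel_perf example (build/examples/accel_perf) with the workload flags and a JSON accel config fed through -c /dev/fd/62, and the val=/read loop appears to parse accel_perf's printed configuration summary (opcode, module, 4096-byte transfer size, queue depth 32, 1-second run time). The [[ -n software ]] / [[ -n <opcode> ]] checks at the end of each block then assert that a software-module run of the expected opcode completed. A minimal hand-run replay of the xor invocation recorded above, assuming the same workspace layout and skipping the harness's config plumbing, might look like:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Flags copied from the accel_perf command line in the xtrace above; dropping -c /dev/fd/62 and
  # relying on accel_perf defaults is an assumption, as is reading -y as "verify results".
  ./build/examples/accel_perf -t 1 -w xor -y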
00:07:56.584 [2024-07-22 10:24:02.110284] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1748607 ] 00:07:56.584 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.584 [2024-07-22 10:24:02.180000] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.584 [2024-07-22 10:24:02.213480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.584 10:24:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:56.584 10:24:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.584 10:24:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.584 10:24:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.584 10:24:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:56.584 10:24:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.584 10:24:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.584 10:24:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.584 10:24:02 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:56.584 10:24:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.584 10:24:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.584 10:24:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.585 10:24:02 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.585 10:24:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:57.969 10:24:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:57.969 10:24:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:57.969 10:24:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:57.969 10:24:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:57.969 10:24:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:57.969 10:24:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:57.969 10:24:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:57.969 10:24:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:57.969 10:24:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:57.969 10:24:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:57.969 10:24:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:57.969 10:24:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:57.969 10:24:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:57.969 10:24:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:57.969 10:24:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:57.969 10:24:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:57.969 10:24:03 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:57.969 10:24:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:57.969 10:24:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:57.969 10:24:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:57.969 10:24:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:57.969 10:24:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:57.969 10:24:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:57.969 10:24:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:57.969 10:24:03 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:57.969 10:24:03 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:57.969 10:24:03 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:57.969 00:07:57.969 real 0m1.247s 00:07:57.969 user 0m1.148s 00:07:57.969 sys 0m0.111s 00:07:57.969 10:24:03 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:57.969 10:24:03 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:57.969 ************************************ 00:07:57.969 END TEST accel_xor 00:07:57.969 ************************************ 00:07:57.969 10:24:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:57.969 10:24:03 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:57.969 10:24:03 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:57.969 10:24:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.969 10:24:03 accel -- common/autotest_common.sh@10 -- # set +x 00:07:57.969 ************************************ 00:07:57.969 START TEST accel_xor 00:07:57.969 ************************************ 00:07:57.969 10:24:03 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:07:57.969 10:24:03 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:57.969 10:24:03 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:57.969 10:24:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:57.969 10:24:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:57.969 10:24:03 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:57.969 10:24:03 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:57.969 10:24:03 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:57.969 10:24:03 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:57.969 10:24:03 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:57.969 10:24:03 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:57.969 10:24:03 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:57.969 10:24:03 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:57.969 10:24:03 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:57.969 10:24:03 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:57.969 [2024-07-22 10:24:03.435575] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
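The second accel_xor block differs from the first only in the extra -x 3 argument visible in its run_test line and accel_perf command; the trace that follows parses it as val=3, which presumably selects three xor source buffers instead of the default two parsed in the previous run (val=2) — an inference from the log, not something it states. The equivalent standalone command, under the same assumptions as the sketch above, would be:

  # Hypothetical replay of the three-source xor run; -x value taken from the log.
  ./build/examples/accel_perf -t 1 -w xor -y -x 3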
00:07:57.969 [2024-07-22 10:24:03.435679] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1748956 ] 00:07:57.969 EAL: No free 2048 kB hugepages reported on node 1 00:07:57.969 [2024-07-22 10:24:03.509760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.969 [2024-07-22 10:24:03.542549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.969 10:24:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:57.969 10:24:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:57.969 10:24:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:57.969 10:24:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:57.969 10:24:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:57.969 10:24:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:57.970 10:24:03 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:57.970 10:24:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:59.351 10:24:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:59.351 10:24:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:59.351 10:24:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:59.351 10:24:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:59.351 10:24:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:59.351 10:24:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:59.351 10:24:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:59.351 10:24:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:59.351 10:24:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:59.351 10:24:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:59.351 10:24:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:59.351 10:24:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:59.351 10:24:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:59.351 10:24:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:59.351 10:24:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:59.351 10:24:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:59.351 10:24:04 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:59.351 10:24:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:59.351 10:24:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:59.351 10:24:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:59.351 10:24:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:59.351 10:24:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:59.351 10:24:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:59.352 10:24:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:59.352 10:24:04 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:59.352 10:24:04 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:59.352 10:24:04 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:59.352 00:07:59.352 real 0m1.253s 00:07:59.352 user 0m1.145s 00:07:59.352 sys 0m0.118s 00:07:59.352 10:24:04 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:59.352 10:24:04 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:59.352 ************************************ 00:07:59.352 END TEST accel_xor 00:07:59.352 ************************************ 00:07:59.352 10:24:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:59.352 10:24:04 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:59.352 10:24:04 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:59.352 10:24:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.352 10:24:04 accel -- common/autotest_common.sh@10 -- # set +x 00:07:59.352 ************************************ 00:07:59.352 START TEST accel_dif_verify 00:07:59.352 ************************************ 00:07:59.352 10:24:04 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:59.352 [2024-07-22 10:24:04.763175] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
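For accel_dif_verify the recorded command is accel_perf -c /dev/fd/62 -t 1 -w dif_verify, and the parsed summary that follows carries two '4096 bytes' values plus '512 bytes' and '8 bytes'; a plausible reading is the transfer size together with the DIF block and per-block metadata geometry coming from accel_perf defaults, though that mapping is an interpretation rather than something the log spells out. A bare-bones replay under the same assumptions as the earlier sketches:

  # Hypothetical standalone run; DIF geometry is left to accel_perf defaults here.
  ./build/examples/accel_perf -t 1 -w dif_verify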
00:07:59.352 [2024-07-22 10:24:04.763248] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1749309 ] 00:07:59.352 EAL: No free 2048 kB hugepages reported on node 1 00:07:59.352 [2024-07-22 10:24:04.832569] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.352 [2024-07-22 10:24:04.866562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:59.352 10:24:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:00.291 10:24:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:08:00.291 10:24:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:00.291 10:24:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:00.291 10:24:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:00.291 10:24:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:00.291 10:24:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:00.291 10:24:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:00.291 10:24:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:00.291 10:24:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:00.291 10:24:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:00.291 10:24:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:00.291 10:24:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:00.291 10:24:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:00.291 10:24:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:00.291 10:24:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:00.291 10:24:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:00.291 10:24:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:00.291 10:24:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:00.291 10:24:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:00.291 10:24:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:00.291 10:24:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:00.291 10:24:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:00.291 10:24:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:00.291 10:24:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:00.291 10:24:05 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:00.291 10:24:05 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:08:00.291 10:24:05 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:00.291 00:08:00.291 real 0m1.248s 00:08:00.291 user 0m1.147s 00:08:00.291 sys 0m0.114s 00:08:00.291 10:24:05 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:00.291 10:24:05 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:08:00.291 ************************************ 00:08:00.291 END TEST accel_dif_verify 00:08:00.291 ************************************ 00:08:00.551 10:24:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:00.551 10:24:06 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:08:00.551 10:24:06 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:00.551 10:24:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.551 10:24:06 accel -- common/autotest_common.sh@10 -- # set +x 00:08:00.551 ************************************ 00:08:00.551 START TEST accel_dif_generate 00:08:00.551 ************************************ 00:08:00.551 10:24:06 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.551 
10:24:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:08:00.551 [2024-07-22 10:24:06.085595] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:08:00.551 [2024-07-22 10:24:06.085673] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1749514 ] 00:08:00.551 EAL: No free 2048 kB hugepages reported on node 1 00:08:00.551 [2024-07-22 10:24:06.155127] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.551 [2024-07-22 10:24:06.190288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:08:00.551 10:24:06 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.551 10:24:06 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.551 10:24:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:01.934 10:24:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:01.934 10:24:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:01.934 10:24:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:01.934 10:24:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:01.934 10:24:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:01.934 10:24:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:01.934 10:24:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:01.934 10:24:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:01.934 10:24:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:01.934 10:24:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:01.934 10:24:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:01.934 10:24:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:01.934 10:24:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:01.934 10:24:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:01.934 10:24:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:01.934 10:24:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:01.934 10:24:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:01.934 10:24:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:01.934 10:24:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:01.934 10:24:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:01.934 10:24:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:01.934 10:24:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:01.934 10:24:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:01.934 10:24:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:01.934 10:24:07 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:01.934 10:24:07 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:08:01.934 10:24:07 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:01.934 00:08:01.934 real 0m1.248s 00:08:01.934 user 0m1.139s 00:08:01.934 sys 0m0.123s 00:08:01.934 10:24:07 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:01.934 10:24:07 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:08:01.934 ************************************ 00:08:01.934 END TEST accel_dif_generate 00:08:01.934 ************************************ 00:08:01.934 10:24:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:01.934 10:24:07 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:08:01.934 10:24:07 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:01.934 10:24:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:01.934 10:24:07 accel -- common/autotest_common.sh@10 -- # set +x 00:08:01.934 ************************************ 00:08:01.934 START TEST accel_dif_generate_copy 00:08:01.934 ************************************ 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:08:01.934 [2024-07-22 10:24:07.408132] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
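Each block in this stretch ends the same way: the [[ -n software ]] / [[ -n <opcode> ]] / [[ software == software ]] checks pass, a real/user/sys line is printed (which looks like bash's time output for the whole accel_test call, roughly 1.25 s per test here), and an END TEST banner closes the case before the next run_test starts. A hypothetical smoke loop covering the software-module workloads exercised in this part of the log, again leaving out the harness's per-test JSON config plumbing, could be:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Not the accel.sh machinery itself, just a sketch that reruns the same workloads in order.
  for w in compare xor dif_verify dif_generate dif_generate_copy; do
    ./build/examples/accel_perf -t 1 -w "$w" || { echo "accel_perf failed for workload $w"; break; }
  done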
00:08:01.934 [2024-07-22 10:24:07.408194] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1749699 ] 00:08:01.934 EAL: No free 2048 kB hugepages reported on node 1 00:08:01.934 [2024-07-22 10:24:07.476391] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.934 [2024-07-22 10:24:07.509844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:01.934 10:24:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:03.317 10:24:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:03.317 10:24:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:03.317 10:24:08 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:08:03.317 10:24:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:03.317 10:24:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:03.317 10:24:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:03.317 10:24:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:03.317 10:24:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:03.317 10:24:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:03.317 10:24:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:03.317 10:24:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:03.317 10:24:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:03.317 10:24:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:03.317 10:24:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:03.317 10:24:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:03.317 10:24:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:03.317 10:24:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:03.317 10:24:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:03.317 10:24:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:03.317 10:24:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:03.317 10:24:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:03.317 10:24:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:03.317 10:24:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:03.317 10:24:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:03.317 10:24:08 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:03.317 10:24:08 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:08:03.317 10:24:08 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:03.317 00:08:03.317 real 0m1.246s 00:08:03.317 user 0m1.149s 00:08:03.317 sys 0m0.108s 00:08:03.317 10:24:08 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:03.317 10:24:08 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:08:03.317 ************************************ 00:08:03.317 END TEST accel_dif_generate_copy 00:08:03.317 ************************************ 00:08:03.317 10:24:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:03.317 10:24:08 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:08:03.317 10:24:08 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:03.317 10:24:08 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:08:03.317 10:24:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.317 10:24:08 accel -- common/autotest_common.sh@10 -- # set +x 00:08:03.317 ************************************ 00:08:03.317 START TEST accel_comp 00:08:03.317 ************************************ 00:08:03.317 10:24:08 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:03.317 10:24:08 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:08:03.317 10:24:08 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:08:03.317 10:24:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:03.317 10:24:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:03.317 10:24:08 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:03.317 10:24:08 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:03.317 10:24:08 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:08:03.317 10:24:08 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:08:03.318 [2024-07-22 10:24:08.730820] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:08:03.318 [2024-07-22 10:24:08.730902] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1750258 ] 00:08:03.318 EAL: No free 2048 kB hugepages reported on node 1 00:08:03.318 [2024-07-22 10:24:08.799790] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.318 [2024-07-22 10:24:08.835393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:03.318 10:24:08 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:03.318 10:24:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:04.261 10:24:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:04.261 10:24:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:04.261 10:24:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:04.261 10:24:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:04.261 10:24:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:04.261 10:24:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:04.261 10:24:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:04.261 10:24:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:04.261 10:24:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:04.261 10:24:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:04.262 10:24:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:04.262 10:24:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:04.262 10:24:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:04.262 10:24:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:04.262 10:24:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:04.262 10:24:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:04.262 10:24:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:04.262 10:24:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:04.262 10:24:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:04.262 10:24:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:04.262 10:24:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:04.262 10:24:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:04.262 10:24:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:04.262 10:24:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:04.262 10:24:09 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:04.262 10:24:09 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:08:04.262 10:24:09 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:04.262 00:08:04.262 real 0m1.253s 00:08:04.262 user 0m1.156s 00:08:04.262 sys 0m0.108s 00:08:04.262 10:24:09 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:04.262 10:24:09 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:08:04.262 ************************************ 00:08:04.262 END TEST accel_comp 00:08:04.262 ************************************ 00:08:04.524 10:24:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:04.524 10:24:09 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:04.524 10:24:09 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:04.524 10:24:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.524 10:24:09 accel -- 
common/autotest_common.sh@10 -- # set +x 00:08:04.524 ************************************ 00:08:04.524 START TEST accel_decomp 00:08:04.524 ************************************ 00:08:04.524 10:24:10 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:04.524 10:24:10 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:08:04.524 10:24:10 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:08:04.524 10:24:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:04.524 10:24:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:04.524 10:24:10 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:04.524 10:24:10 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:04.524 10:24:10 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:08:04.524 10:24:10 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:04.524 10:24:10 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:04.524 10:24:10 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:04.524 10:24:10 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:04.524 10:24:10 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:04.524 10:24:10 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:08:04.524 10:24:10 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:08:04.524 [2024-07-22 10:24:10.057964] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
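Each accel suite in this run follows the same pattern: run_test() wraps accel_test, which launches the accel_perf example binary with a workload selector, and the xtrace above records the exact command line (accel.sh@12). As a rough standalone reproduction of the decompress case being set up here — a sketch only, assuming the same workspace checkout path and that accel_perf may fall back to its software module when the JSON config the harness normally pipes in over /dev/fd/62 is omitted:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w decompress \
      -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
  # -t 1 : run the workload for about one second (matches the 'val=1' / 'val=1 seconds'
  #        entries in the trace and the ~1.2 s 'real' time each suite reports)
  # -w   : workload type; the accel_comp suite that just finished used -w compress without -y
  # -l   : the bib test file used as the (de)compression payload

The '[[ -n software ]]' checks at the end of each suite confirm the software accel module was selected, matching the 'accel_module=software' assignments in the trace.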
00:08:04.524 [2024-07-22 10:24:10.058027] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1750799 ] 00:08:04.524 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.525 [2024-07-22 10:24:10.127039] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.525 [2024-07-22 10:24:10.161928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:04.525 10:24:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:05.913 10:24:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:05.913 10:24:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:05.913 10:24:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:05.913 10:24:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:05.913 10:24:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:05.913 10:24:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:05.913 10:24:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:05.913 10:24:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:05.913 10:24:11 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:05.913 10:24:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:05.913 10:24:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:05.913 10:24:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:05.913 10:24:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:05.913 10:24:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:05.913 10:24:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:05.913 10:24:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:05.913 10:24:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:05.913 10:24:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:05.913 10:24:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:05.913 10:24:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:05.913 10:24:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:05.913 10:24:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:05.913 10:24:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:05.913 10:24:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:05.913 10:24:11 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:05.913 10:24:11 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:05.913 10:24:11 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:05.913 00:08:05.913 real 0m1.252s 00:08:05.913 user 0m1.149s 00:08:05.913 sys 0m0.114s 00:08:05.913 10:24:11 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:05.913 10:24:11 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:08:05.913 ************************************ 00:08:05.913 END TEST accel_decomp 00:08:05.913 ************************************ 00:08:05.913 10:24:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:05.913 10:24:11 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:05.913 10:24:11 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:05.913 10:24:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.913 10:24:11 accel -- common/autotest_common.sh@10 -- # set +x 00:08:05.913 ************************************ 00:08:05.913 START TEST accel_decomp_full 00:08:05.913 ************************************ 00:08:05.913 10:24:11 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:05.913 10:24:11 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:08:05.913 [2024-07-22 10:24:11.386918] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:08:05.913 [2024-07-22 10:24:11.387023] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1751175 ] 00:08:05.913 EAL: No free 2048 kB hugepages reported on node 1 00:08:05.913 [2024-07-22 10:24:11.459260] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.913 [2024-07-22 10:24:11.491939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:05.913 10:24:11 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:05.913 10:24:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:07.295 10:24:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:07.295 10:24:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:07.295 10:24:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:07.295 10:24:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:07.295 10:24:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:07.295 10:24:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:07.295 10:24:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:07.295 10:24:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:07.295 10:24:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:07.295 10:24:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:07.295 10:24:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:07.295 10:24:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:07.295 10:24:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:07.295 10:24:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:07.295 10:24:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:07.295 10:24:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:07.295 10:24:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:07.295 10:24:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:07.295 10:24:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:07.295 10:24:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:07.295 10:24:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:07.295 10:24:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:07.295 10:24:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:07.295 10:24:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:07.295 10:24:12 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:07.295 10:24:12 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:07.295 10:24:12 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:07.295 00:08:07.295 real 0m1.263s 00:08:07.295 user 0m1.156s 00:08:07.295 sys 0m0.120s 00:08:07.295 10:24:12 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:07.295 10:24:12 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:08:07.295 ************************************ 00:08:07.295 END TEST accel_decomp_full 00:08:07.295 ************************************ 00:08:07.295 10:24:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:07.295 10:24:12 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:07.295 10:24:12 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:08:07.295 10:24:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:07.295 10:24:12 accel -- common/autotest_common.sh@10 -- # set +x 00:08:07.295 ************************************ 00:08:07.296 START TEST accel_decomp_mcore 00:08:07.296 ************************************ 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:07.296 [2024-07-22 10:24:12.721951] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
00:08:07.296 [2024-07-22 10:24:12.722017] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1751345 ] 00:08:07.296 EAL: No free 2048 kB hugepages reported on node 1 00:08:07.296 [2024-07-22 10:24:12.791851] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:07.296 [2024-07-22 10:24:12.830516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:07.296 [2024-07-22 10:24:12.830734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:07.296 [2024-07-22 10:24:12.831063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:07.296 [2024-07-22 10:24:12.831064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:07.296 10:24:12 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:08:07.296 10:24:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:08.677 10:24:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:08.677 10:24:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:08.677 10:24:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:08.677 10:24:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:08.677 10:24:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:08.677 10:24:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:08.677 10:24:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:08.677 10:24:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:08.677 10:24:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:08.677 10:24:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:08.677 10:24:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:08.677 10:24:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:08.677 10:24:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:08.677 10:24:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:08.677 10:24:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:08.677 10:24:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:08.677 10:24:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:08.677 10:24:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:08.677 10:24:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:08.677 10:24:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:08.677 10:24:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:08.677 10:24:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:08.677 10:24:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:08.677 10:24:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:08.677 10:24:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:08.677 10:24:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:08.677 10:24:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:08.677 10:24:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:08.677 10:24:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:08.677 10:24:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:08.677 10:24:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:08.677 10:24:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:08.677 10:24:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:08.677 10:24:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:08.677 10:24:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:08.677 10:24:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:08.677 10:24:13 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:08.677 10:24:13 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:08.677 10:24:13 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:08.677 00:08:08.677 real 0m1.263s 00:08:08.677 user 0m4.389s 00:08:08.677 sys 0m0.122s 00:08:08.677 10:24:13 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:08.677 10:24:13 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:08.677 ************************************ 00:08:08.677 END TEST accel_decomp_mcore 00:08:08.677 ************************************ 00:08:08.677 10:24:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:08.677 10:24:13 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:08.677 10:24:13 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:08.677 10:24:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.677 10:24:13 accel -- common/autotest_common.sh@10 -- # set +x 00:08:08.677 ************************************ 00:08:08.677 START TEST accel_decomp_full_mcore 00:08:08.677 ************************************ 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:08.677 [2024-07-22 10:24:14.060410] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
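The -m 0xf passed to accel_perf for the mcore suites surfaces in the DPDK EAL parameters as -c 0xf, with 'Total cores available: 4' and reactors started on cores 0-3. Elapsed time stays around 1.26 s (the -t 1 budget plus startup), while user time climbs to 0m4.389s, consistent with all four cores decompressing concurrently. A sketch of the invocation recorded above, same assumptions as before:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w decompress \
      -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf
  # -m 0xf : core mask handed through to the EAL (-c 0xf in the parameters above),
  #          giving four reactor threads instead of one

accel_decomp_full_mcore, starting here, combines this core mask with the -o 0 full-buffer option from the previous suite.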
00:08:08.677 [2024-07-22 10:24:14.060474] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1751585 ] 00:08:08.677 EAL: No free 2048 kB hugepages reported on node 1 00:08:08.677 [2024-07-22 10:24:14.127835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:08.677 [2024-07-22 10:24:14.161149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:08.677 [2024-07-22 10:24:14.161263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:08.677 [2024-07-22 10:24:14.161431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:08.677 [2024-07-22 10:24:14.161431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:08.677 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:08.678 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:08.678 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:08.678 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:08.678 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:08.678 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:08.678 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:08.678 10:24:14 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:08:08.678 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:08.678 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:08.678 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:08.678 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:08.678 10:24:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:09.619 10:24:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:09.619 10:24:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:09.619 10:24:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:09.619 10:24:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:09.619 10:24:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:09.619 10:24:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:09.619 10:24:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:09.619 10:24:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:09.619 10:24:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:09.619 10:24:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:09.619 10:24:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:09.619 10:24:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:09.619 10:24:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:09.619 10:24:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:09.619 10:24:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:09.619 10:24:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:09.619 10:24:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:09.619 10:24:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:09.619 10:24:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:09.619 10:24:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:09.619 10:24:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:09.619 10:24:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:09.619 10:24:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:09.619 10:24:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:09.619 10:24:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:09.619 10:24:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:09.619 10:24:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:09.619 10:24:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:09.619 10:24:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:09.619 10:24:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:09.619 10:24:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:09.619 10:24:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:09.619 10:24:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:09.619 10:24:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:09.619 10:24:15 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:08:09.619 10:24:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:09.619 10:24:15 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:09.619 10:24:15 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:09.619 10:24:15 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:09.619 00:08:09.619 real 0m1.266s 00:08:09.619 user 0m4.434s 00:08:09.619 sys 0m0.113s 00:08:09.619 10:24:15 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.619 10:24:15 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:09.619 ************************************ 00:08:09.619 END TEST accel_decomp_full_mcore 00:08:09.619 ************************************ 00:08:09.880 10:24:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:09.881 10:24:15 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:09.881 10:24:15 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:09.881 10:24:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.881 10:24:15 accel -- common/autotest_common.sh@10 -- # set +x 00:08:09.881 ************************************ 00:08:09.881 START TEST accel_decomp_mthread 00:08:09.881 ************************************ 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:09.881 [2024-07-22 10:24:15.400260] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
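The accel_decomp_mthread case starting here is driven by the accel_perf example binary named above; the EAL parameter dump on the next line is that same command's startup output. Outside the harness, the run reduces to roughly the sketch below. The temporary config file is an assumption of this sketch, standing in for the accel JSON that build_accel_config pipes through /dev/fd/62; with no hardware accel module configured, the software module handles the work, which is what the [[ software == software ]] check at the end of the test asserts.

    # Minimal stand-in for the harness's accel JSON config (assumption of this sketch).
    cfg=$(mktemp)
    echo '{"subsystems": []}' > "$cfg"

    accel_perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
    bib=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib

    # -t 1: run for 1 second (the '1 seconds' value read back below)
    # -w decompress: workload under test; -l: input data file for the workload
    # -y: verify the result; -T 2: two threads per core (the 'mthread' in the test name)
    "$accel_perf" -c "$cfg" -t 1 -w decompress -l "$bib" -y -T 2

    rm -f "$cfg"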
00:08:09.881 [2024-07-22 10:24:15.400347] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1751937 ] 00:08:09.881 EAL: No free 2048 kB hugepages reported on node 1 00:08:09.881 [2024-07-22 10:24:15.466528] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.881 [2024-07-22 10:24:15.498524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:09.881 10:24:15 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.881 10:24:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.267 10:24:16 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:08:11.267 10:24:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.267 10:24:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.267 10:24:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.267 10:24:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:11.267 10:24:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.267 10:24:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.267 10:24:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.267 10:24:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:11.267 10:24:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.267 10:24:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.267 10:24:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.267 10:24:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:11.267 10:24:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.267 10:24:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.267 10:24:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.267 10:24:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:11.267 10:24:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.267 10:24:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.267 10:24:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.267 10:24:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:11.267 10:24:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.267 10:24:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.267 10:24:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.267 10:24:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:11.267 10:24:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.267 10:24:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.267 10:24:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.267 10:24:16 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:11.267 10:24:16 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:11.267 10:24:16 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:11.267 00:08:11.267 real 0m1.249s 00:08:11.267 user 0m1.156s 00:08:11.267 sys 0m0.106s 00:08:11.267 10:24:16 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:11.267 10:24:16 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:11.267 ************************************ 00:08:11.267 END TEST accel_decomp_mthread 00:08:11.267 ************************************ 00:08:11.267 10:24:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:11.267 10:24:16 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:11.267 10:24:16 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:11.267 10:24:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:11.267 10:24:16 accel -- 
common/autotest_common.sh@10 -- # set +x 00:08:11.267 ************************************ 00:08:11.267 START TEST accel_decomp_full_mthread 00:08:11.267 ************************************ 00:08:11.267 10:24:16 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:11.267 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:11.267 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:11.267 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.267 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.267 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:11.267 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:11.267 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:11.267 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:11.267 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:11.267 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:11.268 [2024-07-22 10:24:16.721632] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
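accel_decomp_full_mthread, starting here, runs the same decompress workload as the preceding accel_decomp_mthread case with one additional option, -o 0. Judging from the values the harness reads back, that switches the per-operation size from the default '4096 bytes' to the full '111250 bytes' input, so each operation works on the whole test file rather than 4 KiB chunks; treat that as an inference from this log rather than a general statement of the option's semantics. Relative to the earlier sketch, the only change is:

    # As before, plus -o 0: use the full input size per operation
    # ('111250 bytes' below) instead of the default 4096-byte transfer.
    "$accel_perf" -c "$cfg" -t 1 -w decompress -l "$bib" -y -o 0 -T 2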
00:08:11.268 [2024-07-22 10:24:16.721699] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1752284 ] 00:08:11.268 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.268 [2024-07-22 10:24:16.789680] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.268 [2024-07-22 10:24:16.823430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.268 10:24:16 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.268 10:24:16 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.268 10:24:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:12.726 10:24:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:12.726 10:24:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:12.726 10:24:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:12.726 10:24:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:12.726 10:24:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:12.726 10:24:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:12.726 10:24:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:12.726 10:24:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:12.726 10:24:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:12.726 10:24:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:12.726 10:24:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:12.726 10:24:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:12.726 10:24:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:12.726 10:24:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:12.726 10:24:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:12.726 10:24:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:12.726 10:24:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:12.726 10:24:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:12.726 10:24:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:12.726 10:24:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:12.726 10:24:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:12.726 10:24:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:12.726 10:24:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:12.726 10:24:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:12.726 10:24:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:12.726 10:24:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:12.726 10:24:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:12.726 10:24:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:12.726 10:24:17 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:12.726 10:24:17 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:12.726 10:24:17 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:12.726 00:08:12.726 real 0m1.275s 00:08:12.726 user 0m1.181s 00:08:12.726 sys 0m0.107s 00:08:12.726 10:24:17 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:12.726 10:24:17 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:12.726 ************************************ 00:08:12.726 END 
TEST accel_decomp_full_mthread 00:08:12.726 ************************************ 00:08:12.726 10:24:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:12.726 10:24:18 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:08:12.726 10:24:18 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:12.726 10:24:18 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:12.726 10:24:18 accel -- accel/accel.sh@137 -- # build_accel_config 00:08:12.726 10:24:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.726 10:24:18 accel -- common/autotest_common.sh@10 -- # set +x 00:08:12.726 10:24:18 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:12.726 10:24:18 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:12.726 10:24:18 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:12.726 10:24:18 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:12.726 10:24:18 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:12.726 10:24:18 accel -- accel/accel.sh@40 -- # local IFS=, 00:08:12.726 10:24:18 accel -- accel/accel.sh@41 -- # jq -r . 00:08:12.726 ************************************ 00:08:12.726 START TEST accel_dif_functional_tests 00:08:12.726 ************************************ 00:08:12.726 10:24:18 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:12.726 [2024-07-22 10:24:18.094258] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:08:12.726 [2024-07-22 10:24:18.094308] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1752580 ] 00:08:12.726 EAL: No free 2048 kB hugepages reported on node 1 00:08:12.726 [2024-07-22 10:24:18.159995] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:12.726 [2024-07-22 10:24:18.194119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.726 [2024-07-22 10:24:18.194234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:12.726 [2024-07-22 10:24:18.194237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.726 00:08:12.726 00:08:12.726 CUnit - A unit testing framework for C - Version 2.1-3 00:08:12.726 http://cunit.sourceforge.net/ 00:08:12.726 00:08:12.726 00:08:12.726 Suite: accel_dif 00:08:12.726 Test: verify: DIF generated, GUARD check ...passed 00:08:12.726 Test: verify: DIF generated, APPTAG check ...passed 00:08:12.726 Test: verify: DIF generated, REFTAG check ...passed 00:08:12.726 Test: verify: DIF not generated, GUARD check ...[2024-07-22 10:24:18.243022] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:12.726 passed 00:08:12.726 Test: verify: DIF not generated, APPTAG check ...[2024-07-22 10:24:18.243067] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:12.726 passed 00:08:12.726 Test: verify: DIF not generated, REFTAG check ...[2024-07-22 10:24:18.243087] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:12.726 passed 00:08:12.726 Test: verify: APPTAG correct, APPTAG check ...passed 00:08:12.726 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-22 
10:24:18.243135] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:08:12.726 passed 00:08:12.726 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:08:12.726 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:08:12.726 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:08:12.726 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-22 10:24:18.243247] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:08:12.726 passed 00:08:12.726 Test: verify copy: DIF generated, GUARD check ...passed 00:08:12.726 Test: verify copy: DIF generated, APPTAG check ...passed 00:08:12.726 Test: verify copy: DIF generated, REFTAG check ...passed 00:08:12.726 Test: verify copy: DIF not generated, GUARD check ...[2024-07-22 10:24:18.243370] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:12.726 passed 00:08:12.726 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-22 10:24:18.243392] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:12.726 passed 00:08:12.726 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-22 10:24:18.243418] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:12.726 passed 00:08:12.726 Test: generate copy: DIF generated, GUARD check ...passed 00:08:12.726 Test: generate copy: DIF generated, APTTAG check ...passed 00:08:12.726 Test: generate copy: DIF generated, REFTAG check ...passed 00:08:12.726 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:08:12.726 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:08:12.726 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:08:12.726 Test: generate copy: iovecs-len validate ...[2024-07-22 10:24:18.243598] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:08:12.726 passed 00:08:12.726 Test: generate copy: buffer alignment validate ...passed 00:08:12.726 00:08:12.726 Run Summary: Type Total Ran Passed Failed Inactive 00:08:12.726 suites 1 1 n/a 0 0 00:08:12.726 tests 26 26 26 0 0 00:08:12.726 asserts 115 115 115 0 n/a 00:08:12.726 00:08:12.726 Elapsed time = 0.002 seconds 00:08:12.726 00:08:12.726 real 0m0.300s 00:08:12.726 user 0m0.413s 00:08:12.726 sys 0m0.135s 00:08:12.726 10:24:18 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:12.726 10:24:18 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:08:12.726 ************************************ 00:08:12.726 END TEST accel_dif_functional_tests 00:08:12.726 ************************************ 00:08:12.726 10:24:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:12.726 00:08:12.727 real 0m29.074s 00:08:12.727 user 0m32.442s 00:08:12.727 sys 0m4.339s 00:08:12.727 10:24:18 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:12.727 10:24:18 accel -- common/autotest_common.sh@10 -- # set +x 00:08:12.727 ************************************ 00:08:12.727 END TEST accel 00:08:12.727 ************************************ 00:08:13.001 10:24:18 -- common/autotest_common.sh@1142 -- # return 0 00:08:13.001 10:24:18 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:13.001 10:24:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:13.001 10:24:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.001 10:24:18 -- common/autotest_common.sh@10 -- # set +x 00:08:13.001 ************************************ 00:08:13.001 START TEST accel_rpc 00:08:13.001 ************************************ 00:08:13.001 10:24:18 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:13.001 * Looking for test storage... 00:08:13.001 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:08:13.001 10:24:18 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:13.001 10:24:18 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1752708 00:08:13.001 10:24:18 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1752708 00:08:13.001 10:24:18 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:08:13.001 10:24:18 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 1752708 ']' 00:08:13.001 10:24:18 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.001 10:24:18 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:13.001 10:24:18 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.001 10:24:18 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:13.001 10:24:18 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:13.001 [2024-07-22 10:24:18.621020] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
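The accel_rpc suite beginning here starts a bare spdk_tgt with --wait-for-rpc and then drives opcode assignment over the /var/tmp/spdk.sock socket named above; the EAL parameter dump that follows is that target's startup output. The rpc_cmd calls visible further down correspond to roughly this sequence with scripts/rpc.py (a sketch of the flow, not a verbatim transcript of the harness):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Before framework_start_init the target only records assignment requests,
    # even for a module that does not exist ("incorrect"), as the NOTICE lines show.
    "$rpc" accel_assign_opc -o copy -m incorrect
    "$rpc" accel_assign_opc -o copy -m software

    # Finish subsystem initialization, then confirm the 'copy' opcode ended up
    # on the software module, the same check the test performs with jq and grep.
    "$rpc" framework_start_init
    "$rpc" accel_get_opc_assignments | jq -r .copy | grep software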
00:08:13.001 [2024-07-22 10:24:18.621096] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1752708 ] 00:08:13.001 EAL: No free 2048 kB hugepages reported on node 1 00:08:13.001 [2024-07-22 10:24:18.694392] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.262 [2024-07-22 10:24:18.732260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.832 10:24:19 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:13.832 10:24:19 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:13.832 10:24:19 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:13.832 10:24:19 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:13.832 10:24:19 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:13.832 10:24:19 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:13.832 10:24:19 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:13.832 10:24:19 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:13.832 10:24:19 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.832 10:24:19 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:13.832 ************************************ 00:08:13.832 START TEST accel_assign_opcode 00:08:13.832 ************************************ 00:08:13.832 10:24:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:08:13.832 10:24:19 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:13.832 10:24:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.832 10:24:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:13.832 [2024-07-22 10:24:19.442350] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:13.832 10:24:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.832 10:24:19 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:08:13.832 10:24:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.832 10:24:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:13.832 [2024-07-22 10:24:19.454375] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:08:13.832 10:24:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.832 10:24:19 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:13.832 10:24:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.832 10:24:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:14.092 10:24:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.092 10:24:19 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:14.092 10:24:19 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:08:14.092 10:24:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 
00:08:14.092 10:24:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:14.092 10:24:19 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:08:14.092 10:24:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.092 software 00:08:14.092 00:08:14.092 real 0m0.208s 00:08:14.092 user 0m0.050s 00:08:14.092 sys 0m0.011s 00:08:14.092 10:24:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:14.092 10:24:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:14.092 ************************************ 00:08:14.092 END TEST accel_assign_opcode 00:08:14.092 ************************************ 00:08:14.092 10:24:19 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:08:14.092 10:24:19 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1752708 00:08:14.092 10:24:19 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 1752708 ']' 00:08:14.092 10:24:19 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 1752708 00:08:14.092 10:24:19 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:08:14.092 10:24:19 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:14.092 10:24:19 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1752708 00:08:14.092 10:24:19 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:14.092 10:24:19 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:14.092 10:24:19 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1752708' 00:08:14.092 killing process with pid 1752708 00:08:14.092 10:24:19 accel_rpc -- common/autotest_common.sh@967 -- # kill 1752708 00:08:14.092 10:24:19 accel_rpc -- common/autotest_common.sh@972 -- # wait 1752708 00:08:14.353 00:08:14.353 real 0m1.470s 00:08:14.353 user 0m1.535s 00:08:14.353 sys 0m0.433s 00:08:14.353 10:24:19 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:14.353 10:24:19 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.353 ************************************ 00:08:14.353 END TEST accel_rpc 00:08:14.353 ************************************ 00:08:14.353 10:24:19 -- common/autotest_common.sh@1142 -- # return 0 00:08:14.353 10:24:19 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:14.353 10:24:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:14.353 10:24:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:14.353 10:24:19 -- common/autotest_common.sh@10 -- # set +x 00:08:14.353 ************************************ 00:08:14.353 START TEST app_cmdline 00:08:14.353 ************************************ 00:08:14.353 10:24:20 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:14.613 * Looking for test storage... 
00:08:14.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:14.613 10:24:20 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:14.613 10:24:20 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1753117 00:08:14.613 10:24:20 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1753117 00:08:14.613 10:24:20 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:14.613 10:24:20 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 1753117 ']' 00:08:14.613 10:24:20 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.613 10:24:20 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:14.613 10:24:20 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.613 10:24:20 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:14.613 10:24:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:14.613 [2024-07-22 10:24:20.167017] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:08:14.613 [2024-07-22 10:24:20.167092] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1753117 ] 00:08:14.613 EAL: No free 2048 kB hugepages reported on node 1 00:08:14.613 [2024-07-22 10:24:20.238247] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.613 [2024-07-22 10:24:20.277948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.553 10:24:20 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:15.553 10:24:20 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:08:15.553 10:24:20 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:08:15.553 { 00:08:15.553 "version": "SPDK v24.09-pre git sha1 8fb860b73", 00:08:15.553 "fields": { 00:08:15.553 "major": 24, 00:08:15.553 "minor": 9, 00:08:15.553 "patch": 0, 00:08:15.553 "suffix": "-pre", 00:08:15.553 "commit": "8fb860b73" 00:08:15.553 } 00:08:15.553 } 00:08:15.553 10:24:21 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:15.553 10:24:21 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:15.553 10:24:21 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:15.553 10:24:21 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:15.553 10:24:21 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:15.553 10:24:21 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:15.553 10:24:21 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.553 10:24:21 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:15.553 10:24:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:15.553 10:24:21 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.553 10:24:21 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:15.553 10:24:21 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:15.553 10:24:21 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:15.553 10:24:21 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:08:15.553 10:24:21 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:15.553 10:24:21 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:15.553 10:24:21 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:15.553 10:24:21 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:15.553 10:24:21 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:15.553 10:24:21 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:15.553 10:24:21 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:15.553 10:24:21 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:15.553 10:24:21 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:15.553 10:24:21 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:15.813 request: 00:08:15.813 { 00:08:15.813 "method": "env_dpdk_get_mem_stats", 00:08:15.813 "req_id": 1 00:08:15.813 } 00:08:15.813 Got JSON-RPC error response 00:08:15.813 response: 00:08:15.813 { 00:08:15.813 "code": -32601, 00:08:15.813 "message": "Method not found" 00:08:15.813 } 00:08:15.813 10:24:21 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:08:15.813 10:24:21 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:15.813 10:24:21 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:15.813 10:24:21 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:15.813 10:24:21 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1753117 00:08:15.813 10:24:21 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 1753117 ']' 00:08:15.813 10:24:21 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 1753117 00:08:15.813 10:24:21 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:08:15.813 10:24:21 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:15.813 10:24:21 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1753117 00:08:15.813 10:24:21 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:15.813 10:24:21 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:15.813 10:24:21 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1753117' 00:08:15.813 killing process with pid 1753117 00:08:15.813 10:24:21 app_cmdline -- common/autotest_common.sh@967 -- # kill 1753117 00:08:15.813 10:24:21 app_cmdline -- common/autotest_common.sh@972 -- # wait 1753117 00:08:16.073 00:08:16.073 real 0m1.517s 00:08:16.073 user 0m1.829s 00:08:16.073 sys 0m0.381s 00:08:16.073 10:24:21 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:08:16.073 10:24:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:16.073 ************************************ 00:08:16.073 END TEST app_cmdline 00:08:16.073 ************************************ 00:08:16.073 10:24:21 -- common/autotest_common.sh@1142 -- # return 0 00:08:16.073 10:24:21 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:16.073 10:24:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:16.073 10:24:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:16.073 10:24:21 -- common/autotest_common.sh@10 -- # set +x 00:08:16.073 ************************************ 00:08:16.073 START TEST version 00:08:16.073 ************************************ 00:08:16.073 10:24:21 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:16.073 * Looking for test storage... 00:08:16.073 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:16.073 10:24:21 version -- app/version.sh@17 -- # get_header_version major 00:08:16.073 10:24:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:16.073 10:24:21 version -- app/version.sh@14 -- # cut -f2 00:08:16.073 10:24:21 version -- app/version.sh@14 -- # tr -d '"' 00:08:16.073 10:24:21 version -- app/version.sh@17 -- # major=24 00:08:16.073 10:24:21 version -- app/version.sh@18 -- # get_header_version minor 00:08:16.073 10:24:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:16.073 10:24:21 version -- app/version.sh@14 -- # cut -f2 00:08:16.073 10:24:21 version -- app/version.sh@14 -- # tr -d '"' 00:08:16.073 10:24:21 version -- app/version.sh@18 -- # minor=9 00:08:16.073 10:24:21 version -- app/version.sh@19 -- # get_header_version patch 00:08:16.073 10:24:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:16.073 10:24:21 version -- app/version.sh@14 -- # cut -f2 00:08:16.073 10:24:21 version -- app/version.sh@14 -- # tr -d '"' 00:08:16.073 10:24:21 version -- app/version.sh@19 -- # patch=0 00:08:16.073 10:24:21 version -- app/version.sh@20 -- # get_header_version suffix 00:08:16.073 10:24:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:16.073 10:24:21 version -- app/version.sh@14 -- # cut -f2 00:08:16.073 10:24:21 version -- app/version.sh@14 -- # tr -d '"' 00:08:16.074 10:24:21 version -- app/version.sh@20 -- # suffix=-pre 00:08:16.074 10:24:21 version -- app/version.sh@22 -- # version=24.9 00:08:16.074 10:24:21 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:16.074 10:24:21 version -- app/version.sh@28 -- # version=24.9rc0 00:08:16.074 10:24:21 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:16.074 10:24:21 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:08:16.335 10:24:21 version -- app/version.sh@30 -- # py_version=24.9rc0 00:08:16.335 10:24:21 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:08:16.335 00:08:16.335 real 0m0.172s 00:08:16.335 user 0m0.087s 00:08:16.335 sys 0m0.124s 00:08:16.335 10:24:21 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:16.335 10:24:21 version -- common/autotest_common.sh@10 -- # set +x 00:08:16.335 ************************************ 00:08:16.335 END TEST version 00:08:16.335 ************************************ 00:08:16.335 10:24:21 -- common/autotest_common.sh@1142 -- # return 0 00:08:16.335 10:24:21 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:08:16.335 10:24:21 -- spdk/autotest.sh@198 -- # uname -s 00:08:16.335 10:24:21 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:08:16.335 10:24:21 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:16.335 10:24:21 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:16.335 10:24:21 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:08:16.335 10:24:21 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:08:16.335 10:24:21 -- spdk/autotest.sh@260 -- # timing_exit lib 00:08:16.335 10:24:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:16.335 10:24:21 -- common/autotest_common.sh@10 -- # set +x 00:08:16.335 10:24:21 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:08:16.335 10:24:21 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:08:16.335 10:24:21 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:08:16.335 10:24:21 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:08:16.335 10:24:21 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:08:16.335 10:24:21 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:08:16.335 10:24:21 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:16.335 10:24:21 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:16.335 10:24:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:16.335 10:24:21 -- common/autotest_common.sh@10 -- # set +x 00:08:16.335 ************************************ 00:08:16.335 START TEST nvmf_tcp 00:08:16.335 ************************************ 00:08:16.335 10:24:21 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:16.335 * Looking for test storage... 00:08:16.335 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:16.335 10:24:22 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:16.335 10:24:22 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:16.335 10:24:22 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:16.335 10:24:22 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:08:16.335 10:24:22 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:16.335 10:24:22 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:16.335 10:24:22 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:16.335 10:24:22 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:16.335 10:24:22 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:16.335 10:24:22 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:16.335 10:24:22 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:16.335 10:24:22 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:16.335 10:24:22 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:16.335 10:24:22 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:16.335 10:24:22 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:16.335 10:24:22 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:16.335 10:24:22 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:16.335 10:24:22 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:16.335 10:24:22 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:16.335 10:24:22 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:16.335 10:24:22 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:16.335 10:24:22 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:16.335 10:24:22 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:16.335 10:24:22 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:16.335 10:24:22 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.335 10:24:22 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.335 10:24:22 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.335 10:24:22 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:08:16.335 10:24:22 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.335 10:24:22 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:08:16.335 10:24:22 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:16.596 10:24:22 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:16.596 10:24:22 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:16.596 10:24:22 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:16.596 10:24:22 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:16.596 10:24:22 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:16.596 10:24:22 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:16.596 10:24:22 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:16.596 10:24:22 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:16.596 10:24:22 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:08:16.596 10:24:22 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:08:16.596 10:24:22 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:16.596 10:24:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:16.596 10:24:22 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:08:16.596 10:24:22 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:16.596 10:24:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:16.596 10:24:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:16.596 10:24:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:16.596 ************************************ 00:08:16.596 START TEST nvmf_example 00:08:16.596 ************************************ 00:08:16.596 10:24:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:16.596 * Looking for test storage... 
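For reference, the NVME_HOSTNQN/NVME_HOSTID pair recorded by common.sh above comes straight from nvme-cli, and NVME_CONNECT='nvme connect' is what a kernel-initiator variant of this test would use; this particular run drives I/O with spdk_nvme_perf instead, so the connect line below is only an illustrative sketch built from values visible elsewhere in this log (address 10.0.0.2:4420 and subsystem nqn.2016-06.io.spdk:cnode1 are configured later in the run):

    # regenerate a host NQN the same way common.sh does
    nvme gen-hostnqn

    # illustrative kernel-initiator connect using the same values (not what this run does)
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
        --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396
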
00:08:16.596 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:08:16.597 10:24:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:24.738 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:24.738 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:24.738 Found net devices under 
0000:31:00.0: cvl_0_0 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:24.738 Found net devices under 0000:31:00.1: cvl_0_1 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:24.738 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:24.738 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.507 ms 00:08:24.738 00:08:24.738 --- 10.0.0.2 ping statistics --- 00:08:24.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:24.738 rtt min/avg/max/mdev = 0.507/0.507/0.507/0.000 ms 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:24.738 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:24.738 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:08:24.738 00:08:24.738 --- 10.0.0.1 ping statistics --- 00:08:24.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:24.738 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1757904 00:08:24.738 10:24:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:24.739 10:24:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:08:24.739 10:24:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1757904 00:08:24.739 10:24:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 1757904 ']' 00:08:24.739 10:24:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.739 10:24:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:24.739 10:24:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:24.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
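Taken together, the nvmf_tcp_init trace above reduces to a small point-to-point setup between the two E810 ports: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) and cvl_0_1 stays in the default namespace as the initiator side (10.0.0.1). A minimal standalone sketch of the same steps, assuming the same interface names:

    # flush stale addresses, create the target namespace, move one port into it
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # initiator address on the host, target address inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # open the NVMe/TCP port and verify reachability both ways, as the log does
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
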
00:08:24.739 10:24:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:24.739 10:24:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:24.999 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.569 10:24:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:25.569 10:24:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:08:25.569 10:24:31 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:25.569 10:24:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:25.569 10:24:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:25.829 10:24:31 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:25.829 10:24:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.829 10:24:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:25.829 10:24:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.829 10:24:31 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:25.829 10:24:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.829 10:24:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:25.829 10:24:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.829 10:24:31 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:25.829 10:24:31 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:25.829 10:24:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.829 10:24:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:25.829 10:24:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.829 10:24:31 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:25.829 10:24:31 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:25.829 10:24:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.829 10:24:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:25.829 10:24:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.829 10:24:31 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:25.829 10:24:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.829 10:24:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:25.829 10:24:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.829 10:24:31 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:08:25.829 10:24:31 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:25.829 EAL: No free 2048 kB hugepages reported on node 1 
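The rpc_cmd calls above are the whole target configuration: a TCP transport, one 64 MB/512 B malloc bdev exported as a namespace of nqn.2016-06.io.spdk:cnode1, and a listener on 10.0.0.2:4420, followed by a 10-second spdk_nvme_perf run from the host side. A sketch of the same sequence issued directly with scripts/rpc.py, assuming the example target is already up on the default /var/tmp/spdk.sock socket as in this run:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512        # 64 MB bdev, 512-byte blocks -> Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # 4 KiB random I/O, 30% reads, queue depth 64, 10 seconds -- same workload as above
    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
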
00:08:35.878 Initializing NVMe Controllers 00:08:35.878 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:35.878 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:35.878 Initialization complete. Launching workers. 00:08:35.878 ======================================================== 00:08:35.878 Latency(us) 00:08:35.878 Device Information : IOPS MiB/s Average min max 00:08:35.878 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18495.80 72.25 3462.02 798.43 16250.54 00:08:35.878 ======================================================== 00:08:35.878 Total : 18495.80 72.25 3462.02 798.43 16250.54 00:08:35.878 00:08:36.138 10:24:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:36.138 10:24:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:36.138 10:24:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:36.138 10:24:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:08:36.138 10:24:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:36.138 10:24:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:08:36.138 10:24:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:36.138 10:24:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:36.138 rmmod nvme_tcp 00:08:36.138 rmmod nvme_fabrics 00:08:36.138 rmmod nvme_keyring 00:08:36.138 10:24:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:36.138 10:24:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:08:36.138 10:24:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:08:36.138 10:24:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1757904 ']' 00:08:36.138 10:24:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1757904 00:08:36.138 10:24:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 1757904 ']' 00:08:36.138 10:24:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 1757904 00:08:36.138 10:24:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:08:36.138 10:24:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:36.138 10:24:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1757904 00:08:36.138 10:24:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:08:36.138 10:24:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:08:36.138 10:24:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1757904' 00:08:36.138 killing process with pid 1757904 00:08:36.138 10:24:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 1757904 00:08:36.138 10:24:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 1757904 00:08:36.138 nvmf threads initialize successfully 00:08:36.138 bdev subsystem init successfully 00:08:36.138 created a nvmf target service 00:08:36.138 create targets's poll groups done 00:08:36.138 all subsystems of target started 00:08:36.138 nvmf target is running 00:08:36.138 all subsystems of target stopped 00:08:36.138 destroy targets's poll groups done 00:08:36.138 destroyed the nvmf target service 00:08:36.138 bdev subsystem finish successfully 00:08:36.138 nvmf threads destroy successfully 00:08:36.138 10:24:41 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:36.138 10:24:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:36.138 10:24:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:36.138 10:24:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:36.138 10:24:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:36.138 10:24:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.138 10:24:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:36.138 10:24:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.680 10:24:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:38.680 10:24:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:38.680 10:24:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:38.680 10:24:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:38.680 00:08:38.680 real 0m21.838s 00:08:38.680 user 0m45.781s 00:08:38.680 sys 0m7.536s 00:08:38.680 10:24:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:38.680 10:24:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:38.680 ************************************ 00:08:38.680 END TEST nvmf_example 00:08:38.680 ************************************ 00:08:38.680 10:24:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:38.680 10:24:43 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:38.680 10:24:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:38.680 10:24:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:38.680 10:24:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:38.680 ************************************ 00:08:38.680 START TEST nvmf_filesystem 00:08:38.680 ************************************ 00:08:38.680 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:38.680 * Looking for test storage... 
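Before the filesystem test output, a quick sanity check on the spdk_nvme_perf summary from the nvmf_example run above: at a 4 KiB I/O size, 18495.80 IOPS x 4096 B is about 75.76 MB/s = 72.25 MiB/s, which matches the MiB/s column, and with queue depth 64 Little's law gives an average latency of 64 / 18495.80 s, roughly 3460 us, in line with the reported 3462.02 -- so the latency columns are in microseconds (per the Latency(us) header) and the throughput column is simply IOPS times the I/O size.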
00:08:38.680 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:38.680 10:24:44 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:08:38.680 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:38.680 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:08:38.680 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:38.680 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:38.680 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:08:38.680 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:08:38.680 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:38.680 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:08:38.680 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:38.680 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:38.680 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:38.680 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:38.680 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:38.680 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:38.680 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:38.680 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:38.680 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:38.680 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:38.680 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:38.680 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:38.680 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:38.680 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:38.680 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:38.680 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:38.680 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:38.680 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:38.680 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:38.680 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:38.680 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:38.680 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:38.681 10:24:44 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # 
_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:38.681 #define SPDK_CONFIG_H 00:08:38.681 #define SPDK_CONFIG_APPS 1 00:08:38.681 #define SPDK_CONFIG_ARCH native 00:08:38.681 #undef SPDK_CONFIG_ASAN 00:08:38.681 #undef SPDK_CONFIG_AVAHI 00:08:38.681 #undef SPDK_CONFIG_CET 00:08:38.681 #define SPDK_CONFIG_COVERAGE 1 00:08:38.681 #define SPDK_CONFIG_CROSS_PREFIX 00:08:38.681 #undef SPDK_CONFIG_CRYPTO 00:08:38.681 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:38.681 #undef SPDK_CONFIG_CUSTOMOCF 00:08:38.681 #undef SPDK_CONFIG_DAOS 00:08:38.681 #define SPDK_CONFIG_DAOS_DIR 00:08:38.681 #define SPDK_CONFIG_DEBUG 1 00:08:38.681 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:38.681 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:38.681 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:08:38.681 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:38.681 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:38.681 #undef SPDK_CONFIG_DPDK_UADK 00:08:38.681 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:38.681 #define SPDK_CONFIG_EXAMPLES 1 00:08:38.681 #undef SPDK_CONFIG_FC 00:08:38.681 #define SPDK_CONFIG_FC_PATH 00:08:38.681 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:38.681 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:38.681 #undef SPDK_CONFIG_FUSE 00:08:38.681 #undef SPDK_CONFIG_FUZZER 00:08:38.681 #define SPDK_CONFIG_FUZZER_LIB 00:08:38.681 #undef SPDK_CONFIG_GOLANG 00:08:38.681 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:38.681 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:38.681 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:38.681 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:08:38.681 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:38.681 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:38.681 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:38.681 #define SPDK_CONFIG_IDXD 1 00:08:38.681 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:38.681 #undef SPDK_CONFIG_IPSEC_MB 00:08:38.681 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:38.681 #define SPDK_CONFIG_ISAL 1 00:08:38.681 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:38.681 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:38.681 #define 
SPDK_CONFIG_LIBDIR 00:08:38.681 #undef SPDK_CONFIG_LTO 00:08:38.681 #define SPDK_CONFIG_MAX_LCORES 128 00:08:38.681 #define SPDK_CONFIG_NVME_CUSE 1 00:08:38.681 #undef SPDK_CONFIG_OCF 00:08:38.681 #define SPDK_CONFIG_OCF_PATH 00:08:38.681 #define SPDK_CONFIG_OPENSSL_PATH 00:08:38.681 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:38.681 #define SPDK_CONFIG_PGO_DIR 00:08:38.681 #undef SPDK_CONFIG_PGO_USE 00:08:38.681 #define SPDK_CONFIG_PREFIX /usr/local 00:08:38.681 #undef SPDK_CONFIG_RAID5F 00:08:38.681 #undef SPDK_CONFIG_RBD 00:08:38.681 #define SPDK_CONFIG_RDMA 1 00:08:38.681 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:38.681 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:38.681 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:38.681 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:38.681 #define SPDK_CONFIG_SHARED 1 00:08:38.681 #undef SPDK_CONFIG_SMA 00:08:38.681 #define SPDK_CONFIG_TESTS 1 00:08:38.681 #undef SPDK_CONFIG_TSAN 00:08:38.681 #define SPDK_CONFIG_UBLK 1 00:08:38.681 #define SPDK_CONFIG_UBSAN 1 00:08:38.681 #undef SPDK_CONFIG_UNIT_TESTS 00:08:38.681 #undef SPDK_CONFIG_URING 00:08:38.681 #define SPDK_CONFIG_URING_PATH 00:08:38.681 #undef SPDK_CONFIG_URING_ZNS 00:08:38.681 #undef SPDK_CONFIG_USDT 00:08:38.681 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:38.681 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:38.681 #define SPDK_CONFIG_VFIO_USER 1 00:08:38.681 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:38.681 #define SPDK_CONFIG_VHOST 1 00:08:38.681 #define SPDK_CONFIG_VIRTIO 1 00:08:38.681 #undef SPDK_CONFIG_VTUNE 00:08:38.681 #define SPDK_CONFIG_VTUNE_DIR 00:08:38.681 #define SPDK_CONFIG_WERROR 1 00:08:38.681 #define SPDK_CONFIG_WPDK_DIR 00:08:38.681 #undef SPDK_CONFIG_XNVME 00:08:38.681 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 
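The long pattern match from applications.sh a few lines above is just confirming, against the generated include/spdk/config.h dump, that this is a debug build (#define SPDK_CONFIG_DEBUG 1). An equivalent standalone check, assuming the same workspace layout:

    grep -q '#define SPDK_CONFIG_DEBUG 1' \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h \
        && echo 'debug build' || echo 'release build'
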
00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:08:38.681 
10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:08:38.681 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export 
SPDK_TEST_LVOL 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : v22.11.4 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:08:38.682 
10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export 
PYTHONDONTWRITEBYTECODE=1 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 
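Just above, the harness resets the sanitizer environment: it rewrites /var/tmp/asan_suppression_file with a single leak rule for libfuse3 and points LeakSanitizer at it, alongside the ASan/UBSan option strings and the default RPC socket path. A condensed equivalent of that sequence (the option strings are copied from the trace):

    # Suppress the known libfuse3 leak for LSan and keep the ASan/UBSan
    # behaviour the autotest expects (abort on error, no ASan core dumps).
    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"
    echo leak:libfuse3.so > "$asan_suppression_file"
    export LSAN_OPTIONS=suppressions=$asan_suppression_file
    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
    export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock   # UNIX socket that rpc_cmd / rpc.py -s talk to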
00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j144 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 1760700 ]] 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 1760700 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.1acxkn 00:08:38.682 
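set_test_storage, which starts in the trace above and continues below, has to find a directory with roughly 2 GiB free for the filesystem test. It builds a candidate list from the per-test directory plus a throw-away path produced by `mktemp -u` (only the name is generated; nothing is created yet), then makes all of the candidates before measuring free space. A sketch of that setup step, assuming `$testdir` is the test directory the harness already defines:

    # Candidate scratch locations, most specific first. mktemp -u -d -t prints a
    # unique directory name under /tmp without creating it (/tmp/spdk.1acxkn here).
    requested_size=2147483648                      # 2 GiB, as passed in this run
    storage_fallback=$(mktemp -udt spdk.XXXXXX)
    storage_candidates=("$testdir"
                        "$storage_fallback/tests/${testdir##*/}"
                        "$storage_fallback")
    mkdir -p "$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback"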
10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.1acxkn/tests/target /tmp/spdk.1acxkn 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=956157952 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4328271872 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=121604554752 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=129370988544 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=7766433792 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64680783872 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685494272 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 
-- # uses["$mount"]=4710400 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:38.682 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=25864253440 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=25874198528 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9945088 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=efivarfs 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=efivarfs 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=179200 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=507904 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=324608 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64684601344 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685494272 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=892928 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12937093120 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12937097216 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:08:38.683 * Looking for test storage... 
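The block above is the free-space check itself: `df -T` output is read into associative arrays keyed by mount point, the mount backing the target test directory is looked up, and its available space is compared with the requested size (on this host the overlay root has plenty, so SPDK_TEST_STORAGE is exported and the function returns 0). A sketch of that logic, following the field order used by the read in the trace; `requested_size` comes from the step above:

    # Read `df -T` (header skipped) into per-mount arrays keyed by mount point.
    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source
        fss["$mount"]=$fs
        sizes["$mount"]=$size
        avails["$mount"]=$avail
        uses["$mount"]=$use
    done < <(df -T | grep -v Filesystem)

    # Resolve the mount that holds the test directory and compare free space.
    target_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
    mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
    target_space=${avails[$mount]}
    (( target_space >= requested_size )) && export SPDK_TEST_STORAGE=$target_dir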
00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=121604554752 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=9981026304 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:38.683 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 
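At this point target/filesystem.sh has sourced nvmf/common.sh, which fixes the NVMe/TCP defaults (port 4420, serial SPDKISFASTANDAWESOME, test subsystem NQN) and generates the initiator identity once per run with `nvme gen-hostnqn`. A small sketch of that identity setup; deriving the host ID by stripping the NQN prefix is an assumption about how common.sh obtains the matching UUID:

    # NVMe-oF defaults and client identity, later handed to `nvme connect`.
    NVMF_PORT=4420
    NVMF_SERIAL=SPDKISFASTANDAWESOME
    NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
    NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:<random uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # assumed: host ID is the trailing UUID
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    NVME_CONNECT='nvme connect'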
00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:08:38.683 10:24:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 
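gather_supported_nvmf_pci_devs, traced above, builds PCI ID lists for the NICs the nvmf tests support (Intel E810/X722 plus several Mellanox parts) and then resolves each matching function to its kernel netdev, which is where the `Found 0000:31:00.x (0x8086 - 0x159b)` and `Found net devices under ...: cvl_0_x` lines below come from. An assumed, condensed equivalent of that lookup for the E810 case (the harness itself works from a cached PCI scan rather than calling lspci per device):

    # Find Intel E810 functions (PCI ID 8086:159b) and the netdevs behind them.
    # lspci -D prints the full domain:bus:dev.fn address; -nn appends [vendor:device].
    for pci in $(lspci -Dnn | awk '/\[8086:159b\]/{print $1}'); do
        for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
            [[ -e $netdev ]] || continue       # port has no bound netdev
            echo "Found net devices under $pci: ${netdev##*/}"
        done
    done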
00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:46.829 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:46.829 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:46.829 Found net devices under 0000:31:00.0: cvl_0_0 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 
-- # [[ tcp == tcp ]] 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:46.829 Found net devices under 0000:31:00.1: cvl_0_1 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:46.829 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:47.091 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:47.091 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:47.091 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:47.091 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:47.091 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.586 ms 00:08:47.091 00:08:47.091 --- 10.0.0.2 ping statistics --- 00:08:47.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:47.091 rtt min/avg/max/mdev = 0.586/0.586/0.586/0.000 ms 00:08:47.091 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:47.091 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:47.091 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:08:47.091 00:08:47.091 --- 10.0.0.1 ping statistics --- 00:08:47.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:47.091 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:08:47.091 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:47.091 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:08:47.091 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:47.091 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:47.091 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:47.091 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:47.092 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:47.092 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:47.092 10:24:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:47.092 10:24:52 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:47.092 10:24:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:47.092 10:24:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:47.092 10:24:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:47.092 ************************************ 00:08:47.092 START TEST nvmf_filesystem_no_in_capsule 00:08:47.092 ************************************ 00:08:47.092 10:24:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:08:47.092 10:24:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:08:47.092 10:24:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:47.092 10:24:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:47.092 10:24:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:47.092 10:24:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:47.092 10:24:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1765008 00:08:47.092 10:24:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1765008 00:08:47.092 10:24:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:47.092 10:24:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 
1765008 ']' 00:08:47.092 10:24:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.092 10:24:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:47.092 10:24:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.092 10:24:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:47.092 10:24:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:47.092 [2024-07-22 10:24:52.719436] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:08:47.092 [2024-07-22 10:24:52.719483] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:47.092 EAL: No free 2048 kB hugepages reported on node 1 00:08:47.354 [2024-07-22 10:24:52.791914] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:47.354 [2024-07-22 10:24:52.826292] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:47.354 [2024-07-22 10:24:52.826333] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:47.354 [2024-07-22 10:24:52.826341] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:47.354 [2024-07-22 10:24:52.826347] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:47.354 [2024-07-22 10:24:52.826353] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
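The remainder of the trace is the actual test bring-up: nvmftestinit has already moved the first E810 port (cvl_0_0, 10.0.0.2) into the cvl_0_0_ns_spdk namespace and left cvl_0_1 (10.0.0.1) on the host side, nvmfappstart launches nvmf_tgt inside that namespace, and the test then creates the TCP transport with in-capsule data disabled, a 512 MB malloc bdev, a subsystem with one namespace and a 4420 listener, before connecting from the host with nvme-cli. A condensed sketch of those steps, mirroring the RPCs and the connect call in the trace; paths assume the SPDK repo root as working directory, and the harness also waits for the RPC socket before issuing RPCs (omitted here):

    # Target side: run nvmf_tgt inside the network namespace, then configure it.
    NS="ip netns exec cvl_0_0_ns_spdk"
    $NS ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0   # -c 0: no in-capsule data
    ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1          # 512 MB bdev, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator side (host namespace): connect and wait for the namespace to appear.
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
    until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 2; done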
00:08:47.354 [2024-07-22 10:24:52.826438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:47.354 [2024-07-22 10:24:52.826498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:47.354 [2024-07-22 10:24:52.826666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.354 [2024-07-22 10:24:52.826667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:47.923 10:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:47.923 10:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:08:47.923 10:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:47.923 10:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:47.923 10:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:47.923 10:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:47.923 10:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:47.923 10:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:47.923 10:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.923 10:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:47.923 [2024-07-22 10:24:53.536134] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:47.923 10:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.923 10:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:47.923 10:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.923 10:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:48.182 Malloc1 00:08:48.182 10:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.182 10:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:48.182 10:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.182 10:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:48.182 10:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.182 10:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:48.182 10:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.182 10:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:08:48.182 10:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.182 10:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:48.182 10:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.182 10:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:48.182 [2024-07-22 10:24:53.669385] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:48.182 10:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.182 10:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:48.182 10:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:08:48.182 10:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:08:48.182 10:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:08:48.182 10:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:08:48.182 10:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:48.182 10:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.182 10:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:48.182 10:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.182 10:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:08:48.182 { 00:08:48.182 "name": "Malloc1", 00:08:48.182 "aliases": [ 00:08:48.182 "b143f481-2253-4cd3-91cc-6ee9249b9ad6" 00:08:48.182 ], 00:08:48.182 "product_name": "Malloc disk", 00:08:48.182 "block_size": 512, 00:08:48.182 "num_blocks": 1048576, 00:08:48.182 "uuid": "b143f481-2253-4cd3-91cc-6ee9249b9ad6", 00:08:48.182 "assigned_rate_limits": { 00:08:48.182 "rw_ios_per_sec": 0, 00:08:48.182 "rw_mbytes_per_sec": 0, 00:08:48.182 "r_mbytes_per_sec": 0, 00:08:48.182 "w_mbytes_per_sec": 0 00:08:48.182 }, 00:08:48.182 "claimed": true, 00:08:48.182 "claim_type": "exclusive_write", 00:08:48.182 "zoned": false, 00:08:48.182 "supported_io_types": { 00:08:48.182 "read": true, 00:08:48.182 "write": true, 00:08:48.182 "unmap": true, 00:08:48.182 "flush": true, 00:08:48.182 "reset": true, 00:08:48.182 "nvme_admin": false, 00:08:48.182 "nvme_io": false, 00:08:48.182 "nvme_io_md": false, 00:08:48.182 "write_zeroes": true, 00:08:48.182 "zcopy": true, 00:08:48.182 "get_zone_info": false, 00:08:48.182 "zone_management": false, 00:08:48.182 "zone_append": false, 00:08:48.182 "compare": false, 00:08:48.182 "compare_and_write": false, 00:08:48.183 "abort": true, 00:08:48.183 "seek_hole": false, 00:08:48.183 "seek_data": false, 00:08:48.183 "copy": true, 00:08:48.183 "nvme_iov_md": false 00:08:48.183 }, 00:08:48.183 "memory_domains": [ 00:08:48.183 { 
00:08:48.183 "dma_device_id": "system", 00:08:48.183 "dma_device_type": 1 00:08:48.183 }, 00:08:48.183 { 00:08:48.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.183 "dma_device_type": 2 00:08:48.183 } 00:08:48.183 ], 00:08:48.183 "driver_specific": {} 00:08:48.183 } 00:08:48.183 ]' 00:08:48.183 10:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:08:48.183 10:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:08:48.183 10:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:08:48.183 10:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:08:48.183 10:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:08:48.183 10:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:08:48.183 10:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:48.183 10:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:49.565 10:24:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:49.565 10:24:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:08:49.565 10:24:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:49.565 10:24:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:49.565 10:24:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:52.106 10:24:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:52.106 10:24:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:52.106 10:24:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:52.106 10:24:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:52.106 10:24:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:52.106 10:24:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:08:52.106 10:24:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:52.106 10:24:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:52.106 10:24:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:52.106 10:24:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:08:52.106 10:24:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:52.106 10:24:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:52.106 10:24:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:52.106 10:24:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:52.106 10:24:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:52.106 10:24:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:52.106 10:24:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:52.106 10:24:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:52.106 10:24:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:53.045 10:24:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:53.045 10:24:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:53.045 10:24:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:53.045 10:24:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:53.045 10:24:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:53.045 ************************************ 00:08:53.045 START TEST filesystem_ext4 00:08:53.045 ************************************ 00:08:53.045 10:24:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:53.045 10:24:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:53.045 10:24:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:53.045 10:24:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:53.045 10:24:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:53.045 10:24:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:53.045 10:24:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:53.045 10:24:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:53.045 10:24:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:53.045 10:24:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:53.045 10:24:58 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:53.045 mke2fs 1.46.5 (30-Dec-2021) 00:08:53.045 Discarding device blocks: 0/522240 done 00:08:53.045 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:53.045 Filesystem UUID: 29c090d8-8484-46ac-913c-8686e56da25b 00:08:53.045 Superblock backups stored on blocks: 00:08:53.045 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:53.045 00:08:53.045 Allocating group tables: 0/64 done 00:08:53.045 Writing inode tables: 0/64 done 00:08:53.305 Creating journal (8192 blocks): done 00:08:53.305 Writing superblocks and filesystem accounting information: 0/64 done 00:08:53.305 00:08:53.305 10:24:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:53.305 10:24:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:54.291 10:24:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:54.291 10:24:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:08:54.291 10:24:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:54.291 10:24:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:08:54.291 10:24:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:54.291 10:24:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:54.291 10:24:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1765008 00:08:54.291 10:24:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:54.291 10:24:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:54.291 10:24:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:54.291 10:24:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:54.291 00:08:54.291 real 0m1.446s 00:08:54.291 user 0m0.022s 00:08:54.291 sys 0m0.053s 00:08:54.291 10:24:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:54.291 10:24:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:54.291 ************************************ 00:08:54.291 END TEST filesystem_ext4 00:08:54.291 ************************************ 00:08:54.291 10:24:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:54.291 10:24:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:54.291 10:24:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:54.291 10:24:59 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:54.291 10:24:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:54.550 ************************************ 00:08:54.550 START TEST filesystem_btrfs 00:08:54.550 ************************************ 00:08:54.550 10:24:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:54.550 10:24:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:54.550 10:24:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:54.550 10:24:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:54.550 10:24:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:54.550 10:24:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:54.550 10:24:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:54.550 10:24:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:54.550 10:24:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:08:54.550 10:24:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:54.550 10:24:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:54.810 btrfs-progs v6.6.2 00:08:54.810 See https://btrfs.readthedocs.io for more information. 00:08:54.810 00:08:54.810 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:54.810 NOTE: several default settings have changed in version 5.15, please make sure 00:08:54.810 this does not affect your deployments: 00:08:54.810 - DUP for metadata (-m dup) 00:08:54.810 - enabled no-holes (-O no-holes) 00:08:54.810 - enabled free-space-tree (-R free-space-tree) 00:08:54.810 00:08:54.810 Label: (null) 00:08:54.810 UUID: 4c331447-68a8-4743-8cdb-080fcd68b6cb 00:08:54.810 Node size: 16384 00:08:54.810 Sector size: 4096 00:08:54.810 Filesystem size: 510.00MiB 00:08:54.810 Block group profiles: 00:08:54.810 Data: single 8.00MiB 00:08:54.810 Metadata: DUP 32.00MiB 00:08:54.810 System: DUP 8.00MiB 00:08:54.810 SSD detected: yes 00:08:54.810 Zoned device: no 00:08:54.810 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:54.810 Runtime features: free-space-tree 00:08:54.810 Checksum: crc32c 00:08:54.810 Number of devices: 1 00:08:54.810 Devices: 00:08:54.810 ID SIZE PATH 00:08:54.810 1 510.00MiB /dev/nvme0n1p1 00:08:54.810 00:08:54.810 10:25:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:54.810 10:25:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:55.748 10:25:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:55.748 10:25:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:08:55.748 10:25:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:55.748 10:25:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:08:55.748 10:25:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:55.748 10:25:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:55.748 10:25:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1765008 00:08:55.748 10:25:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:55.748 10:25:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:55.748 10:25:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:55.748 10:25:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:55.748 00:08:55.748 real 0m1.367s 00:08:55.748 user 0m0.026s 00:08:55.748 sys 0m0.063s 00:08:55.748 10:25:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:55.748 10:25:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:55.748 ************************************ 00:08:55.748 END TEST filesystem_btrfs 00:08:55.748 ************************************ 00:08:55.748 10:25:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:55.748 10:25:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:55.748 10:25:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:55.748 10:25:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:55.748 10:25:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:55.748 ************************************ 00:08:55.748 START TEST filesystem_xfs 00:08:55.748 ************************************ 00:08:55.748 10:25:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:55.748 10:25:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:55.748 10:25:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:55.748 10:25:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:55.748 10:25:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:55.748 10:25:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:55.748 10:25:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:55.748 10:25:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:08:55.748 10:25:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:55.748 10:25:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:56.007 10:25:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:56.007 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:56.007 = sectsz=512 attr=2, projid32bit=1 00:08:56.007 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:56.007 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:56.007 data = bsize=4096 blocks=130560, imaxpct=25 00:08:56.007 = sunit=0 swidth=0 blks 00:08:56.007 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:56.007 log =internal log bsize=4096 blocks=16384, version=2 00:08:56.007 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:56.007 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:56.946 Discarding blocks...Done. 
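Each of the filesystem_ext4/btrfs/xfs subtests above and below drives the same create-and-verify cycle against the exported namespace. Condensed, with the device paths and target PID used by this run (mkfs.ext4 -F and mkfs.btrfs -f take the place of mkfs.xfs -f in the other two subtests):

  mkfs.xfs -f /dev/nvme0n1p1
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa              # create, flush and delete a file over NVMe/TCP
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device
  kill -0 1765008                    # the nvmf target must still be alive
  lsblk -l -o NAME | grep -q -w nvme0n1     # namespace still visible to the host
  lsblk -l -o NAME | grep -q -w nvme0n1p1   # test partition still present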
00:08:56.946 10:25:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:56.946 10:25:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:58.854 10:25:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:58.854 10:25:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:08:58.854 10:25:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:58.854 10:25:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:08:58.854 10:25:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:08:58.854 10:25:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:58.854 10:25:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1765008 00:08:58.854 10:25:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:58.854 10:25:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:58.854 10:25:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:58.854 10:25:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:58.854 00:08:58.854 real 0m2.967s 00:08:58.854 user 0m0.028s 00:08:58.854 sys 0m0.053s 00:08:58.854 10:25:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:58.854 10:25:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:58.854 ************************************ 00:08:58.854 END TEST filesystem_xfs 00:08:58.854 ************************************ 00:08:58.854 10:25:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:58.854 10:25:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:59.114 10:25:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:59.114 10:25:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:59.375 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.375 10:25:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:59.375 10:25:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:59.375 10:25:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:59.375 10:25:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:59.375 10:25:04 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:59.375 10:25:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:59.375 10:25:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:59.375 10:25:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:59.375 10:25:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.375 10:25:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:59.375 10:25:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.375 10:25:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:59.375 10:25:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1765008 00:08:59.375 10:25:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1765008 ']' 00:08:59.375 10:25:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1765008 00:08:59.375 10:25:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:59.375 10:25:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:59.375 10:25:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1765008 00:08:59.375 10:25:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:59.375 10:25:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:59.375 10:25:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1765008' 00:08:59.375 killing process with pid 1765008 00:08:59.375 10:25:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 1765008 00:08:59.375 10:25:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 1765008 00:08:59.697 10:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:59.697 00:08:59.697 real 0m12.529s 00:08:59.697 user 0m49.494s 00:08:59.697 sys 0m1.050s 00:08:59.697 10:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:59.697 10:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:59.697 ************************************ 00:08:59.697 END TEST nvmf_filesystem_no_in_capsule 00:08:59.697 ************************************ 00:08:59.697 10:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:59.697 10:25:05 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:59.697 10:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:08:59.697 10:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:59.697 10:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:59.697 ************************************ 00:08:59.697 START TEST nvmf_filesystem_in_capsule 00:08:59.697 ************************************ 00:08:59.697 10:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:08:59.697 10:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:59.697 10:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:59.697 10:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:59.697 10:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:59.697 10:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:59.697 10:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1767611 00:08:59.697 10:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1767611 00:08:59.697 10:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:59.697 10:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 1767611 ']' 00:08:59.697 10:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.697 10:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:59.697 10:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:59.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.697 10:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:59.697 10:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:59.697 [2024-07-22 10:25:05.326152] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:08:59.697 [2024-07-22 10:25:05.326206] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:00.025 EAL: No free 2048 kB hugepages reported on node 1 00:09:00.025 [2024-07-22 10:25:05.418087] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:00.025 [2024-07-22 10:25:05.454526] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:00.025 [2024-07-22 10:25:05.454565] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:00.025 [2024-07-22 10:25:05.454573] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:00.025 [2024-07-22 10:25:05.454579] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:00.025 [2024-07-22 10:25:05.454585] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:00.025 [2024-07-22 10:25:05.454674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:00.025 [2024-07-22 10:25:05.454793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:00.025 [2024-07-22 10:25:05.454950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.025 [2024-07-22 10:25:05.454950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:00.594 10:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:00.594 10:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:09:00.594 10:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:00.594 10:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:00.594 10:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:00.594 10:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:00.594 10:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:00.594 10:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:09:00.594 10:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.594 10:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:00.594 [2024-07-22 10:25:06.156127] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:00.594 10:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.594 10:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:00.594 10:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.594 10:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:00.594 Malloc1 00:09:00.594 10:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.594 10:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:00.594 10:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.594 10:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:00.594 10:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.594 10:25:06 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:00.594 10:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.594 10:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:00.594 10:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.595 10:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:00.595 10:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.595 10:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:00.595 [2024-07-22 10:25:06.279964] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:00.595 10:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.595 10:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:00.595 10:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:09:00.595 10:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:09:00.595 10:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:09:00.595 10:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:09:00.595 10:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:00.595 10:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.595 10:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:00.854 10:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.854 10:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:09:00.854 { 00:09:00.854 "name": "Malloc1", 00:09:00.854 "aliases": [ 00:09:00.854 "4b6400f0-fade-4958-9ec0-68d65c6b5779" 00:09:00.854 ], 00:09:00.854 "product_name": "Malloc disk", 00:09:00.854 "block_size": 512, 00:09:00.854 "num_blocks": 1048576, 00:09:00.854 "uuid": "4b6400f0-fade-4958-9ec0-68d65c6b5779", 00:09:00.854 "assigned_rate_limits": { 00:09:00.854 "rw_ios_per_sec": 0, 00:09:00.854 "rw_mbytes_per_sec": 0, 00:09:00.854 "r_mbytes_per_sec": 0, 00:09:00.854 "w_mbytes_per_sec": 0 00:09:00.854 }, 00:09:00.854 "claimed": true, 00:09:00.854 "claim_type": "exclusive_write", 00:09:00.854 "zoned": false, 00:09:00.854 "supported_io_types": { 00:09:00.854 "read": true, 00:09:00.854 "write": true, 00:09:00.854 "unmap": true, 00:09:00.854 "flush": true, 00:09:00.854 "reset": true, 00:09:00.854 "nvme_admin": false, 00:09:00.854 "nvme_io": false, 00:09:00.854 "nvme_io_md": false, 00:09:00.854 "write_zeroes": true, 00:09:00.854 "zcopy": true, 00:09:00.854 "get_zone_info": false, 00:09:00.854 "zone_management": false, 00:09:00.854 
"zone_append": false, 00:09:00.854 "compare": false, 00:09:00.854 "compare_and_write": false, 00:09:00.854 "abort": true, 00:09:00.854 "seek_hole": false, 00:09:00.854 "seek_data": false, 00:09:00.854 "copy": true, 00:09:00.854 "nvme_iov_md": false 00:09:00.854 }, 00:09:00.854 "memory_domains": [ 00:09:00.854 { 00:09:00.854 "dma_device_id": "system", 00:09:00.854 "dma_device_type": 1 00:09:00.854 }, 00:09:00.854 { 00:09:00.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.854 "dma_device_type": 2 00:09:00.854 } 00:09:00.854 ], 00:09:00.854 "driver_specific": {} 00:09:00.854 } 00:09:00.854 ]' 00:09:00.854 10:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:09:00.854 10:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:09:00.854 10:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:09:00.854 10:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:09:00.854 10:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:09:00.854 10:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:09:00.854 10:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:00.854 10:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:02.233 10:25:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:02.233 10:25:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:09:02.233 10:25:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:02.233 10:25:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:02.233 10:25:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:09:04.773 10:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:04.773 10:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:04.773 10:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:04.773 10:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:04.773 10:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:04.773 10:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:09:04.773 10:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:04.773 10:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:09:04.773 10:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:04.773 10:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:04.773 10:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:04.773 10:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:04.773 10:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:04.773 10:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:04.773 10:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:04.773 10:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:04.773 10:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:04.773 10:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:04.773 10:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:05.714 10:25:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:09:05.714 10:25:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:05.714 10:25:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:05.714 10:25:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:05.714 10:25:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:05.714 ************************************ 00:09:05.714 START TEST filesystem_in_capsule_ext4 00:09:05.714 ************************************ 00:09:05.714 10:25:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:05.714 10:25:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:05.714 10:25:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:05.714 10:25:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:05.714 10:25:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:09:05.714 10:25:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:09:05.714 10:25:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:09:05.714 10:25:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:09:05.714 10:25:11 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:09:05.714 10:25:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:09:05.714 10:25:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:05.714 mke2fs 1.46.5 (30-Dec-2021) 00:09:05.714 Discarding device blocks: 0/522240 done 00:09:05.714 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:05.714 Filesystem UUID: 1ce04ca8-391a-4d06-9eb0-a187d2ce58a4 00:09:05.714 Superblock backups stored on blocks: 00:09:05.714 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:05.714 00:09:05.714 Allocating group tables: 0/64 done 00:09:05.714 Writing inode tables: 0/64 done 00:09:05.714 Creating journal (8192 blocks): done 00:09:05.714 Writing superblocks and filesystem accounting information: 0/64 done 00:09:05.714 00:09:05.714 10:25:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:09:05.714 10:25:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:06.285 10:25:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:06.285 10:25:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:09:06.285 10:25:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:06.285 10:25:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:09:06.285 10:25:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:06.285 10:25:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:06.285 10:25:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1767611 00:09:06.285 10:25:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:06.285 10:25:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:06.285 10:25:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:06.285 10:25:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:06.285 00:09:06.285 real 0m0.645s 00:09:06.285 user 0m0.023s 00:09:06.285 sys 0m0.051s 00:09:06.285 10:25:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:06.285 10:25:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:06.285 ************************************ 00:09:06.285 END TEST filesystem_in_capsule_ext4 00:09:06.285 ************************************ 00:09:06.285 
10:25:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:09:06.285 10:25:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:06.286 10:25:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:06.286 10:25:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:06.286 10:25:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:06.286 ************************************ 00:09:06.286 START TEST filesystem_in_capsule_btrfs 00:09:06.286 ************************************ 00:09:06.286 10:25:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:06.286 10:25:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:06.286 10:25:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:06.286 10:25:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:06.286 10:25:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:09:06.286 10:25:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:09:06.286 10:25:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:09:06.286 10:25:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:09:06.286 10:25:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:09:06.286 10:25:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:09:06.286 10:25:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:06.546 btrfs-progs v6.6.2 00:09:06.546 See https://btrfs.readthedocs.io for more information. 00:09:06.546 00:09:06.546 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:09:06.546 NOTE: several default settings have changed in version 5.15, please make sure 00:09:06.546 this does not affect your deployments: 00:09:06.546 - DUP for metadata (-m dup) 00:09:06.546 - enabled no-holes (-O no-holes) 00:09:06.546 - enabled free-space-tree (-R free-space-tree) 00:09:06.546 00:09:06.546 Label: (null) 00:09:06.546 UUID: b93f93e7-bcfe-4817-a3b6-2f380dd80f17 00:09:06.546 Node size: 16384 00:09:06.546 Sector size: 4096 00:09:06.546 Filesystem size: 510.00MiB 00:09:06.546 Block group profiles: 00:09:06.546 Data: single 8.00MiB 00:09:06.546 Metadata: DUP 32.00MiB 00:09:06.546 System: DUP 8.00MiB 00:09:06.546 SSD detected: yes 00:09:06.546 Zoned device: no 00:09:06.546 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:09:06.546 Runtime features: free-space-tree 00:09:06.546 Checksum: crc32c 00:09:06.546 Number of devices: 1 00:09:06.546 Devices: 00:09:06.546 ID SIZE PATH 00:09:06.546 1 510.00MiB /dev/nvme0n1p1 00:09:06.546 00:09:06.546 10:25:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:09:06.546 10:25:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:07.129 10:25:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:07.129 10:25:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:09:07.129 10:25:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:07.389 10:25:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:09:07.389 10:25:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:07.389 10:25:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:07.389 10:25:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1767611 00:09:07.389 10:25:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:07.389 10:25:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:07.389 10:25:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:07.389 10:25:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:07.389 00:09:07.389 real 0m0.996s 00:09:07.389 user 0m0.026s 00:09:07.389 sys 0m0.063s 00:09:07.389 10:25:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:07.389 10:25:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:07.389 ************************************ 00:09:07.389 END TEST filesystem_in_capsule_btrfs 00:09:07.389 ************************************ 00:09:07.389 10:25:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:09:07.389 10:25:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:09:07.389 10:25:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:07.389 10:25:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:07.389 10:25:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:07.389 ************************************ 00:09:07.389 START TEST filesystem_in_capsule_xfs 00:09:07.389 ************************************ 00:09:07.389 10:25:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:09:07.389 10:25:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:07.389 10:25:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:07.389 10:25:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:07.389 10:25:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:09:07.389 10:25:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:09:07.389 10:25:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:09:07.389 10:25:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:09:07.389 10:25:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:09:07.389 10:25:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:09:07.389 10:25:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:07.389 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:07.389 = sectsz=512 attr=2, projid32bit=1 00:09:07.389 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:07.389 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:07.389 data = bsize=4096 blocks=130560, imaxpct=25 00:09:07.389 = sunit=0 swidth=0 blks 00:09:07.389 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:07.389 log =internal log bsize=4096 blocks=16384, version=2 00:09:07.389 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:07.389 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:08.327 Discarding blocks...Done. 
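The btrfs pass above and the xfs pass whose mkfs output has just finished both exercise the same round trip: make_filesystem formats the partition on the NVMe-oF-attached namespace, then the test mounts it, writes and removes a file, syncs, and unmounts. A minimal stand-alone sketch of that flow, using only the device and mount point that appear in the trace (/dev/nvme0n1p1, /mnt/device) and the same commands the test runs:

# sketch of the filesystem_in_capsule round trip on the fabric-attached partition
mkfs.xfs -f /dev/nvme0n1p1                 # the btrfs pass above used mkfs.btrfs on the same partition
mkdir -p /mnt/device                       # not in the trace; just makes sure the mount point exists
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa                      # simple write through the NVMe-oF namespace
sync
rm /mnt/device/aaa
sync
umount /mnt/device
lsblk -l -o NAME | grep -q -w nvme0n1p1    # device and partition must still be visible afterwards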
00:09:08.327 10:25:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:09:08.327 10:25:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:10.872 10:25:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:10.872 10:25:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:09:10.872 10:25:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:10.872 10:25:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:09:10.872 10:25:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:09:10.872 10:25:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:10.872 10:25:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1767611 00:09:10.872 10:25:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:10.872 10:25:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:10.872 10:25:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:10.872 10:25:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:10.872 00:09:10.872 real 0m3.093s 00:09:10.872 user 0m0.034s 00:09:10.872 sys 0m0.046s 00:09:10.872 10:25:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:10.872 10:25:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:10.872 ************************************ 00:09:10.872 END TEST filesystem_in_capsule_xfs 00:09:10.872 ************************************ 00:09:10.872 10:25:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:09:10.872 10:25:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:10.872 10:25:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:10.872 10:25:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:10.872 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.872 10:25:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:10.872 10:25:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:09:10.872 10:25:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:10.872 10:25:16 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:11.133 10:25:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:11.133 10:25:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:11.133 10:25:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:09:11.133 10:25:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:11.133 10:25:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.133 10:25:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:11.133 10:25:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.133 10:25:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:11.133 10:25:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1767611 00:09:11.133 10:25:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1767611 ']' 00:09:11.133 10:25:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1767611 00:09:11.133 10:25:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:09:11.133 10:25:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:11.133 10:25:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1767611 00:09:11.133 10:25:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:11.133 10:25:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:11.133 10:25:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1767611' 00:09:11.133 killing process with pid 1767611 00:09:11.133 10:25:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 1767611 00:09:11.133 10:25:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 1767611 00:09:11.394 10:25:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:11.394 00:09:11.394 real 0m11.605s 00:09:11.394 user 0m45.729s 00:09:11.394 sys 0m1.062s 00:09:11.394 10:25:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:11.394 10:25:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:11.394 ************************************ 00:09:11.394 END TEST nvmf_filesystem_in_capsule 00:09:11.394 ************************************ 00:09:11.394 10:25:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:09:11.394 10:25:16 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:09:11.394 10:25:16 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:09:11.394 10:25:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:09:11.394 10:25:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:11.394 10:25:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:09:11.394 10:25:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:11.394 10:25:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:11.394 rmmod nvme_tcp 00:09:11.394 rmmod nvme_fabrics 00:09:11.394 rmmod nvme_keyring 00:09:11.394 10:25:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:11.394 10:25:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:09:11.394 10:25:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:09:11.394 10:25:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:11.394 10:25:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:11.394 10:25:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:11.394 10:25:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:11.394 10:25:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:11.394 10:25:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:11.394 10:25:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.394 10:25:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:11.394 10:25:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:13.939 10:25:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:13.939 00:09:13.939 real 0m35.049s 00:09:13.939 user 1m37.672s 00:09:13.939 sys 0m8.507s 00:09:13.939 10:25:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:13.939 10:25:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:13.939 ************************************ 00:09:13.939 END TEST nvmf_filesystem 00:09:13.939 ************************************ 00:09:13.939 10:25:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:13.939 10:25:19 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:13.939 10:25:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:13.939 10:25:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:13.939 10:25:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:13.939 ************************************ 00:09:13.939 START TEST nvmf_target_discovery 00:09:13.939 ************************************ 00:09:13.939 10:25:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:13.939 * Looking for test storage... 
00:09:13.939 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:13.939 10:25:19 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:13.939 10:25:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:09:13.939 10:25:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:13.939 10:25:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:13.939 10:25:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:13.939 10:25:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:13.939 10:25:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:13.939 10:25:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:13.939 10:25:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:13.939 10:25:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:13.939 10:25:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:13.939 10:25:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:13.939 10:25:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:13.939 10:25:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:13.939 10:25:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:13.939 10:25:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:13.939 10:25:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:13.939 10:25:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:13.939 10:25:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:13.939 10:25:19 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:13.939 10:25:19 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:13.939 10:25:19 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:13.939 10:25:19 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.939 10:25:19 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.939 10:25:19 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.939 10:25:19 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:09:13.939 10:25:19 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.939 10:25:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:09:13.939 10:25:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:13.939 10:25:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:13.939 10:25:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:13.939 10:25:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:13.939 10:25:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:13.939 10:25:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:13.939 10:25:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:13.939 10:25:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:13.939 10:25:19 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:09:13.939 10:25:19 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:09:13.939 10:25:19 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:09:13.939 10:25:19 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:09:13.939 10:25:19 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:09:13.939 10:25:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:13.939 10:25:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:13.939 10:25:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:09:13.939 10:25:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:13.939 10:25:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:13.939 10:25:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:13.939 10:25:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:13.939 10:25:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:13.939 10:25:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:13.939 10:25:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:13.939 10:25:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:09:13.939 10:25:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:22.093 10:25:27 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:22.093 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:22.093 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:22.093 Found net devices under 0000:31:00.0: cvl_0_0 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:22.093 Found net devices under 0000:31:00.1: cvl_0_1 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:22.093 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:22.093 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.637 ms 00:09:22.093 00:09:22.093 --- 10.0.0.2 ping statistics --- 00:09:22.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.093 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:22.093 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:22.093 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:09:22.093 00:09:22.093 --- 10.0.0.1 ping statistics --- 00:09:22.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.093 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1774849 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1774849 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 1774849 ']' 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:22.093 10:25:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:09:22.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.094 10:25:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:22.094 10:25:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.094 [2024-07-22 10:25:27.491118] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:09:22.094 [2024-07-22 10:25:27.491172] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:22.094 EAL: No free 2048 kB hugepages reported on node 1 00:09:22.094 [2024-07-22 10:25:27.565240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:22.094 [2024-07-22 10:25:27.598723] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:22.094 [2024-07-22 10:25:27.598762] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:22.094 [2024-07-22 10:25:27.598770] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:22.094 [2024-07-22 10:25:27.598777] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:22.094 [2024-07-22 10:25:27.598783] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:22.094 [2024-07-22 10:25:27.598923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:22.094 [2024-07-22 10:25:27.599039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:22.094 [2024-07-22 10:25:27.599195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.094 [2024-07-22 10:25:27.599196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:22.665 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:22.665 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:09:22.665 10:25:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:22.665 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:22.665 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.665 10:25:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:22.665 10:25:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:22.665 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.665 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.665 [2024-07-22 10:25:28.313142] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:22.665 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.665 10:25:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:09:22.665 10:25:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:22.665 10:25:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
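The nvmf_target_discovery bring-up that starts above (nvmf_tgt launched inside the cvl_0_0_ns_spdk namespace, the TCP transport, and the seq 1 4 loop whose first bdev_null_create has just run) continues below with a subsystem, namespace and listener per null bdev, plus a discovery listener and a referral. Condensed into stand-alone commands: rpc_cmd in these scripts is a thin wrapper around SPDK's scripts/rpc.py, so the sketch below calls rpc.py directly and assumes it is run from the SPDK source tree against the default /var/tmp/spdk.sock socket.

# target bring-up mirroring the trace (paths relative to the SPDK tree)
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
# (the test waits for /var/tmp/spdk.sock here via waitforlisten)

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192

for i in 1 2 3 4; do
    ./scripts/rpc.py bdev_null_create Null$i 102400 512
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
        -a -s SPDK0000000000000$i
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
        -t tcp -a 10.0.0.2 -s 4420
done

# discovery service listener plus a referral to a second discovery port
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430

With that in place, the discovery log queried further down should hold one record per cnode plus the current discovery subsystem and the 4430 referral, i.e. the six records reported below.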
00:09:22.665 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.665 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.665 Null1 00:09:22.665 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.665 10:25:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:22.665 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.665 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.665 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.665 10:25:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:09:22.665 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.665 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.925 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.925 10:25:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:22.925 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.925 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.925 [2024-07-22 10:25:28.373514] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:22.925 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.925 10:25:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:22.925 10:25:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:09:22.925 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.926 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.926 Null2 00:09:22.926 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.926 10:25:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:09:22.926 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.926 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.926 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.926 10:25:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:09:22.926 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.926 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.926 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.926 10:25:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:22.926 10:25:28 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.926 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.926 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.926 10:25:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:22.926 10:25:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:09:22.926 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.926 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.926 Null3 00:09:22.926 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.926 10:25:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:09:22.926 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.926 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.926 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.926 10:25:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:09:22.926 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.926 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.926 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.926 10:25:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:09:22.926 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.926 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.926 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.926 10:25:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:22.926 10:25:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:09:22.926 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.926 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.926 Null4 00:09:22.926 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.926 10:25:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:09:22.926 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.926 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.926 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.926 10:25:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:09:22.926 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.926 10:25:28 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.926 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.926 10:25:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:09:22.926 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.926 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.926 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.926 10:25:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:22.926 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.926 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.926 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.926 10:25:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:09:22.926 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.926 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.926 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.926 10:25:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:09:23.187 00:09:23.187 Discovery Log Number of Records 6, Generation counter 6 00:09:23.187 =====Discovery Log Entry 0====== 00:09:23.187 trtype: tcp 00:09:23.187 adrfam: ipv4 00:09:23.187 subtype: current discovery subsystem 00:09:23.187 treq: not required 00:09:23.187 portid: 0 00:09:23.187 trsvcid: 4420 00:09:23.187 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:23.187 traddr: 10.0.0.2 00:09:23.187 eflags: explicit discovery connections, duplicate discovery information 00:09:23.187 sectype: none 00:09:23.187 =====Discovery Log Entry 1====== 00:09:23.187 trtype: tcp 00:09:23.187 adrfam: ipv4 00:09:23.187 subtype: nvme subsystem 00:09:23.187 treq: not required 00:09:23.187 portid: 0 00:09:23.187 trsvcid: 4420 00:09:23.187 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:23.187 traddr: 10.0.0.2 00:09:23.187 eflags: none 00:09:23.187 sectype: none 00:09:23.187 =====Discovery Log Entry 2====== 00:09:23.187 trtype: tcp 00:09:23.187 adrfam: ipv4 00:09:23.187 subtype: nvme subsystem 00:09:23.187 treq: not required 00:09:23.187 portid: 0 00:09:23.187 trsvcid: 4420 00:09:23.187 subnqn: nqn.2016-06.io.spdk:cnode2 00:09:23.187 traddr: 10.0.0.2 00:09:23.187 eflags: none 00:09:23.187 sectype: none 00:09:23.187 =====Discovery Log Entry 3====== 00:09:23.187 trtype: tcp 00:09:23.187 adrfam: ipv4 00:09:23.187 subtype: nvme subsystem 00:09:23.187 treq: not required 00:09:23.187 portid: 0 00:09:23.187 trsvcid: 4420 00:09:23.187 subnqn: nqn.2016-06.io.spdk:cnode3 00:09:23.187 traddr: 10.0.0.2 00:09:23.187 eflags: none 00:09:23.187 sectype: none 00:09:23.187 =====Discovery Log Entry 4====== 00:09:23.187 trtype: tcp 00:09:23.187 adrfam: ipv4 00:09:23.187 subtype: nvme subsystem 00:09:23.187 treq: not required 
00:09:23.187 portid: 0 00:09:23.187 trsvcid: 4420 00:09:23.187 subnqn: nqn.2016-06.io.spdk:cnode4 00:09:23.187 traddr: 10.0.0.2 00:09:23.187 eflags: none 00:09:23.187 sectype: none 00:09:23.187 =====Discovery Log Entry 5====== 00:09:23.187 trtype: tcp 00:09:23.187 adrfam: ipv4 00:09:23.187 subtype: discovery subsystem referral 00:09:23.187 treq: not required 00:09:23.187 portid: 0 00:09:23.187 trsvcid: 4430 00:09:23.187 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:23.187 traddr: 10.0.0.2 00:09:23.187 eflags: none 00:09:23.187 sectype: none 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:09:23.187 Perform nvmf subsystem discovery via RPC 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:23.187 [ 00:09:23.187 { 00:09:23.187 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:23.187 "subtype": "Discovery", 00:09:23.187 "listen_addresses": [ 00:09:23.187 { 00:09:23.187 "trtype": "TCP", 00:09:23.187 "adrfam": "IPv4", 00:09:23.187 "traddr": "10.0.0.2", 00:09:23.187 "trsvcid": "4420" 00:09:23.187 } 00:09:23.187 ], 00:09:23.187 "allow_any_host": true, 00:09:23.187 "hosts": [] 00:09:23.187 }, 00:09:23.187 { 00:09:23.187 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:23.187 "subtype": "NVMe", 00:09:23.187 "listen_addresses": [ 00:09:23.187 { 00:09:23.187 "trtype": "TCP", 00:09:23.187 "adrfam": "IPv4", 00:09:23.187 "traddr": "10.0.0.2", 00:09:23.187 "trsvcid": "4420" 00:09:23.187 } 00:09:23.187 ], 00:09:23.187 "allow_any_host": true, 00:09:23.187 "hosts": [], 00:09:23.187 "serial_number": "SPDK00000000000001", 00:09:23.187 "model_number": "SPDK bdev Controller", 00:09:23.187 "max_namespaces": 32, 00:09:23.187 "min_cntlid": 1, 00:09:23.187 "max_cntlid": 65519, 00:09:23.187 "namespaces": [ 00:09:23.187 { 00:09:23.187 "nsid": 1, 00:09:23.187 "bdev_name": "Null1", 00:09:23.187 "name": "Null1", 00:09:23.187 "nguid": "46E597B8B99843FFAABF404FFAFC7075", 00:09:23.187 "uuid": "46e597b8-b998-43ff-aabf-404ffafc7075" 00:09:23.187 } 00:09:23.187 ] 00:09:23.187 }, 00:09:23.187 { 00:09:23.187 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:23.187 "subtype": "NVMe", 00:09:23.187 "listen_addresses": [ 00:09:23.187 { 00:09:23.187 "trtype": "TCP", 00:09:23.187 "adrfam": "IPv4", 00:09:23.187 "traddr": "10.0.0.2", 00:09:23.187 "trsvcid": "4420" 00:09:23.187 } 00:09:23.187 ], 00:09:23.187 "allow_any_host": true, 00:09:23.187 "hosts": [], 00:09:23.187 "serial_number": "SPDK00000000000002", 00:09:23.187 "model_number": "SPDK bdev Controller", 00:09:23.187 "max_namespaces": 32, 00:09:23.187 "min_cntlid": 1, 00:09:23.187 "max_cntlid": 65519, 00:09:23.187 "namespaces": [ 00:09:23.187 { 00:09:23.187 "nsid": 1, 00:09:23.187 "bdev_name": "Null2", 00:09:23.187 "name": "Null2", 00:09:23.187 "nguid": "28FFF574E3EC48F796CC1D2F8C23F808", 00:09:23.187 "uuid": "28fff574-e3ec-48f7-96cc-1d2f8c23f808" 00:09:23.187 } 00:09:23.187 ] 00:09:23.187 }, 00:09:23.187 { 00:09:23.187 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:09:23.187 "subtype": "NVMe", 00:09:23.187 "listen_addresses": [ 00:09:23.187 { 00:09:23.187 "trtype": "TCP", 00:09:23.187 "adrfam": "IPv4", 00:09:23.187 "traddr": "10.0.0.2", 00:09:23.187 "trsvcid": "4420" 00:09:23.187 } 00:09:23.187 ], 00:09:23.187 "allow_any_host": true, 
00:09:23.187 "hosts": [], 00:09:23.187 "serial_number": "SPDK00000000000003", 00:09:23.187 "model_number": "SPDK bdev Controller", 00:09:23.187 "max_namespaces": 32, 00:09:23.187 "min_cntlid": 1, 00:09:23.187 "max_cntlid": 65519, 00:09:23.187 "namespaces": [ 00:09:23.187 { 00:09:23.187 "nsid": 1, 00:09:23.187 "bdev_name": "Null3", 00:09:23.187 "name": "Null3", 00:09:23.187 "nguid": "96DB2EF9D8A24368B2BBA660B2626B61", 00:09:23.187 "uuid": "96db2ef9-d8a2-4368-b2bb-a660b2626b61" 00:09:23.187 } 00:09:23.187 ] 00:09:23.187 }, 00:09:23.187 { 00:09:23.187 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:09:23.187 "subtype": "NVMe", 00:09:23.187 "listen_addresses": [ 00:09:23.187 { 00:09:23.187 "trtype": "TCP", 00:09:23.187 "adrfam": "IPv4", 00:09:23.187 "traddr": "10.0.0.2", 00:09:23.187 "trsvcid": "4420" 00:09:23.187 } 00:09:23.187 ], 00:09:23.187 "allow_any_host": true, 00:09:23.187 "hosts": [], 00:09:23.187 "serial_number": "SPDK00000000000004", 00:09:23.187 "model_number": "SPDK bdev Controller", 00:09:23.187 "max_namespaces": 32, 00:09:23.187 "min_cntlid": 1, 00:09:23.187 "max_cntlid": 65519, 00:09:23.187 "namespaces": [ 00:09:23.187 { 00:09:23.187 "nsid": 1, 00:09:23.187 "bdev_name": "Null4", 00:09:23.187 "name": "Null4", 00:09:23.187 "nguid": "6F4FDC3768B1431E8397D64952460339", 00:09:23.187 "uuid": "6f4fdc37-68b1-431e-8397-d64952460339" 00:09:23.187 } 00:09:23.187 ] 00:09:23.187 } 00:09:23.187 ] 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:09:23.187 10:25:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:23.188 10:25:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:23.188 rmmod nvme_tcp 00:09:23.448 rmmod nvme_fabrics 00:09:23.448 rmmod nvme_keyring 00:09:23.448 10:25:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:23.448 10:25:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:09:23.448 10:25:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:09:23.448 10:25:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1774849 ']' 00:09:23.448 10:25:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1774849 00:09:23.448 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 1774849 ']' 00:09:23.448 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 1774849 00:09:23.448 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:09:23.448 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:23.448 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1774849 00:09:23.448 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:23.448 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:23.448 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1774849' 00:09:23.448 killing process with pid 1774849 00:09:23.448 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 1774849 00:09:23.448 10:25:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 1774849 00:09:23.448 10:25:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:23.448 10:25:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:23.448 10:25:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:23.448 10:25:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:23.448 10:25:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:23.448 10:25:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:23.448 10:25:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:23.448 10:25:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.992 10:25:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:25.992 00:09:25.992 real 0m12.051s 00:09:25.992 user 0m8.474s 00:09:25.992 sys 0m6.417s 00:09:25.992 10:25:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:25.992 10:25:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:25.992 ************************************ 00:09:25.992 END TEST nvmf_target_discovery 00:09:25.992 ************************************ 00:09:25.992 10:25:31 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:09:25.992 10:25:31 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:25.992 10:25:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:25.992 10:25:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:25.992 10:25:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:25.992 ************************************ 00:09:25.992 START TEST nvmf_referrals 00:09:25.992 ************************************ 00:09:25.992 10:25:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:25.992 * Looking for test storage... 00:09:25.992 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:25.992 10:25:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:25.992 10:25:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:09:25.992 10:25:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:25.992 10:25:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:25.992 10:25:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:25.992 10:25:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:25.992 10:25:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:25.992 10:25:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:25.992 10:25:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:25.993 10:25:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:25.993 10:25:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:25.993 10:25:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:25.993 10:25:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:25.993 10:25:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:25.993 10:25:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:25.993 10:25:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:25.993 10:25:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:25.993 10:25:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:25.993 10:25:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:25.993 10:25:31 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:25.993 10:25:31 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:25.993 10:25:31 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:25.993 10:25:31 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.993 10:25:31 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.993 10:25:31 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.993 10:25:31 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:09:25.993 10:25:31 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.993 10:25:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:09:25.993 10:25:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:25.993 10:25:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:25.993 10:25:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:25.993 10:25:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:25.993 10:25:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:25.993 10:25:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:25.993 10:25:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:25.993 10:25:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:25.993 10:25:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:09:25.993 10:25:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:09:25.993 10:25:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
00:09:25.993 10:25:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:09:25.993 10:25:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:09:25.993 10:25:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:09:25.993 10:25:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:09:25.993 10:25:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:25.993 10:25:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:25.993 10:25:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:25.993 10:25:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:25.993 10:25:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:25.993 10:25:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.993 10:25:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:25.993 10:25:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.993 10:25:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:25.993 10:25:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:25.993 10:25:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:09:25.993 10:25:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:34.139 10:25:39 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:34.139 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:34.139 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:34.139 10:25:39 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:34.139 Found net devices under 0000:31:00.0: cvl_0_0 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:34.139 Found net devices under 0000:31:00.1: cvl_0_1 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:34.139 10:25:39 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:34.139 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:34.139 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.688 ms 00:09:34.139 00:09:34.139 --- 10.0.0.2 ping statistics --- 00:09:34.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.139 rtt min/avg/max/mdev = 0.688/0.688/0.688/0.000 ms 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:34.139 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:34.139 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:09:34.139 00:09:34.139 --- 10.0.0.1 ping statistics --- 00:09:34.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.139 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1779887 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1779887 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 1779887 ']' 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
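(A condensed sketch of the nvmf_tcp_init sequence traced above -- not part of the log; run as root, with cvl_0_0/cvl_0_1 being the two e810 ports detected earlier and 10.0.0.2 the target-side address.)
  ip netns add cvl_0_0_ns_spdk                                  # namespace that owns the target port
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move the target NIC into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator address stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP traffic in from the initiator side
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # verify reachability in both directions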
00:09:34.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:34.139 10:25:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:34.139 [2024-07-22 10:25:39.460959] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:09:34.139 [2024-07-22 10:25:39.461022] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:34.139 EAL: No free 2048 kB hugepages reported on node 1 00:09:34.139 [2024-07-22 10:25:39.538196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:34.139 [2024-07-22 10:25:39.578867] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:34.139 [2024-07-22 10:25:39.578906] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:34.139 [2024-07-22 10:25:39.578913] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:34.139 [2024-07-22 10:25:39.578920] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:34.139 [2024-07-22 10:25:39.578925] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:34.139 [2024-07-22 10:25:39.579064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.139 [2024-07-22 10:25:39.579185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:34.139 [2024-07-22 10:25:39.579343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.139 [2024-07-22 10:25:39.579345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:34.710 10:25:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:34.710 10:25:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:09:34.710 10:25:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:34.710 10:25:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:34.710 10:25:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:34.710 10:25:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:34.710 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:34.710 10:25:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.710 10:25:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:34.710 [2024-07-22 10:25:40.276296] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:34.710 10:25:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.710 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:09:34.711 10:25:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.711 10:25:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:34.711 [2024-07-22 10:25:40.292501] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:09:34.711 10:25:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.711 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:09:34.711 10:25:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.711 10:25:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:34.711 10:25:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.711 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:09:34.711 10:25:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.711 10:25:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:34.711 10:25:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.711 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:09:34.711 10:25:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.711 10:25:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:34.711 10:25:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.711 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:34.711 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:09:34.711 10:25:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.711 10:25:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:34.711 10:25:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.711 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:09:34.711 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:09:34.711 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:34.711 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:34.711 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:34.711 10:25:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.711 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:34.711 10:25:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:34.711 10:25:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.973 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:34.973 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:34.973 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:09:34.973 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:34.973 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:34.973 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:34.973 10:25:40 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:34.973 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:34.973 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:34.973 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:34.973 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:09:34.973 10:25:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.973 10:25:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:34.973 10:25:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.973 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:09:34.973 10:25:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.973 10:25:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:34.973 10:25:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.973 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:09:34.973 10:25:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.973 10:25:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:34.973 10:25:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.233 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:35.233 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:09:35.233 10:25:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.233 10:25:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:35.233 10:25:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.233 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:09:35.233 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:09:35.233 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:35.233 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:35.233 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:35.233 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:35.233 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:35.233 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:35.233 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:09:35.233 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 
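(Sketch of the referral round-trip exercised above -- not part of the log; it assumes scripts/rpc.py reaches the nvmf_tgt just started in the namespace over its default UNIX socket and that nvme-cli is installed; the --hostnqn/--hostid flags shown in the trace are elided here.)
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430   # advertise three referrals
  done
  scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
  # the same three addresses must then be reported by the discovery service itself:
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
  scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430   # removing a referral drops it from both views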
00:09:35.233 10:25:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.233 10:25:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:35.233 10:25:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.233 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:35.233 10:25:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.233 10:25:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:35.233 10:25:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.233 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:09:35.233 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:35.233 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:35.233 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:35.234 10:25:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.234 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:35.234 10:25:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:35.234 10:25:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.234 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:09:35.234 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:35.234 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:09:35.234 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:35.234 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:35.234 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:35.234 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:35.234 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:35.494 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:09:35.494 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:35.494 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:09:35.494 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:09:35.494 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:35.494 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:35.494 10:25:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:35.494 10:25:41 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:35.494 10:25:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:09:35.494 10:25:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:09:35.494 10:25:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:35.494 10:25:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:35.494 10:25:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:35.494 10:25:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:35.494 10:25:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:35.494 10:25:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.494 10:25:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:35.494 10:25:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.494 10:25:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:09:35.494 10:25:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:35.494 10:25:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:35.494 10:25:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:35.494 10:25:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.494 10:25:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:35.494 10:25:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:35.754 10:25:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.754 10:25:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:09:35.754 10:25:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:35.754 10:25:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:09:35.754 10:25:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:35.754 10:25:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:35.754 10:25:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:35.754 10:25:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:35.754 10:25:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:35.754 10:25:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:09:35.754 10:25:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:35.754 10:25:41 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:09:35.754 10:25:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:09:35.754 10:25:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:35.754 10:25:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:35.754 10:25:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:35.754 10:25:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:09:35.754 10:25:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:09:35.754 10:25:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:09:35.754 10:25:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:35.754 10:25:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:35.754 10:25:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:36.015 10:25:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:36.015 10:25:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:09:36.015 10:25:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.015 10:25:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:36.015 10:25:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.015 10:25:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:36.015 10:25:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:09:36.015 10:25:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.015 10:25:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:36.015 10:25:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.015 10:25:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:09:36.015 10:25:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:09:36.015 10:25:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:36.015 10:25:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:36.015 10:25:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:36.015 10:25:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:36.015 10:25:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:36.276 10:25:41 nvmf_tcp.nvmf_referrals 
-- target/referrals.sh@26 -- # echo 00:09:36.276 10:25:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:09:36.276 10:25:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:09:36.276 10:25:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:09:36.276 10:25:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:36.276 10:25:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:09:36.276 10:25:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:36.276 10:25:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:09:36.276 10:25:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:36.276 10:25:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:36.276 rmmod nvme_tcp 00:09:36.276 rmmod nvme_fabrics 00:09:36.276 rmmod nvme_keyring 00:09:36.276 10:25:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:36.276 10:25:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:09:36.276 10:25:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:09:36.276 10:25:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1779887 ']' 00:09:36.276 10:25:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1779887 00:09:36.276 10:25:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 1779887 ']' 00:09:36.276 10:25:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 1779887 00:09:36.276 10:25:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:09:36.276 10:25:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:36.276 10:25:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1779887 00:09:36.276 10:25:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:36.276 10:25:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:36.276 10:25:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1779887' 00:09:36.276 killing process with pid 1779887 00:09:36.276 10:25:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 1779887 00:09:36.276 10:25:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 1779887 00:09:36.536 10:25:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:36.536 10:25:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:36.536 10:25:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:36.536 10:25:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:36.536 10:25:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:36.536 10:25:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.536 10:25:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:36.536 10:25:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:38.448 10:25:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:38.448 00:09:38.448 real 0m12.789s 00:09:38.448 user 0m12.699s 00:09:38.448 sys 0m6.505s 00:09:38.448 10:25:44 nvmf_tcp.nvmf_referrals -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:09:38.448 10:25:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:38.448 ************************************ 00:09:38.448 END TEST nvmf_referrals 00:09:38.448 ************************************ 00:09:38.448 10:25:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:38.448 10:25:44 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:38.448 10:25:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:38.448 10:25:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:38.448 10:25:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:38.448 ************************************ 00:09:38.448 START TEST nvmf_connect_disconnect 00:09:38.448 ************************************ 00:09:38.448 10:25:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:38.708 * Looking for test storage... 00:09:38.708 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:38.708 10:25:44 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:38.708 10:25:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:09:38.708 10:25:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:38.708 10:25:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:38.708 10:25:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:38.708 10:25:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:38.708 10:25:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:38.708 10:25:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:38.708 10:25:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:38.708 10:25:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:38.708 10:25:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:38.708 10:25:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:38.708 10:25:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:38.708 10:25:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:38.708 10:25:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:38.708 10:25:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:38.708 10:25:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:38.708 10:25:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:38.708 10:25:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:38.708 10:25:44 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:38.708 10:25:44 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:38.708 10:25:44 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:38.708 10:25:44 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.708 10:25:44 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.708 10:25:44 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.708 10:25:44 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:09:38.708 10:25:44 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.708 10:25:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:09:38.708 10:25:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:38.708 10:25:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:38.708 10:25:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:38.708 10:25:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:38.709 10:25:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:38.709 10:25:44 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:38.709 10:25:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:38.709 10:25:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:38.709 10:25:44 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:38.709 10:25:44 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:38.709 10:25:44 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:09:38.709 10:25:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:38.709 10:25:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:38.709 10:25:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:38.709 10:25:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:38.709 10:25:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:38.709 10:25:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.709 10:25:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:38.709 10:25:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:38.709 10:25:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:38.709 10:25:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:38.709 10:25:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:09:38.709 10:25:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:46.918 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:46.918 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:09:46.918 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:46.918 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:46.918 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:46.918 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:46.918 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:46.918 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:09:46.918 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:46.918 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:09:46.918 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:09:46.918 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:09:46.918 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:09:46.918 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:09:46.918 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:09:46.918 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:46.918 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:46.918 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:46.918 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:46.918 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:46.918 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:46.918 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:46.918 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:46.918 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:46.918 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:46.918 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:46.918 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:46.918 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:46.918 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:46.918 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:46.918 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:46.918 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:46.919 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:46.919 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:46.919 10:25:52 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:46.919 Found net devices under 0000:31:00.0: cvl_0_0 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:46.919 Found net devices under 0000:31:00.1: cvl_0_1 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:46.919 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:46.919 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.496 ms 00:09:46.919 00:09:46.919 --- 10.0.0.2 ping statistics --- 00:09:46.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.919 rtt min/avg/max/mdev = 0.496/0.496/0.496/0.000 ms 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:46.919 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:46.919 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:09:46.919 00:09:46.919 --- 10.0.0.1 ping statistics --- 00:09:46.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.919 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1785336 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1785336 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 1785336 ']' 00:09:46.919 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.180 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:47.180 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.180 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:47.180 10:25:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:47.180 [2024-07-22 10:25:52.666041] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
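The bring-up traced above is what nvmf_tcp_init in nvmf/common.sh does on this test bed: it moves the target-side port into a private network namespace, addresses both ends out of 10.0.0.0/24, opens TCP port 4420, and verifies reachability with ping in both directions; nvmfappstart then launches nvmf_tgt inside that namespace. A condensed sketch of the same sequence, using the interface and namespace names from this run (cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk) and assuming root privileges, is:

# Namespace-based TCP test bed, condensed from the trace above (run as root).
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side stays on the host
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target listen address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                                 # host -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # namespace -> host
# The target is then started inside the namespace (command as traced above):
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF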
00:09:47.180 [2024-07-22 10:25:52.666104] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:47.180 EAL: No free 2048 kB hugepages reported on node 1 00:09:47.180 [2024-07-22 10:25:52.742215] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:47.180 [2024-07-22 10:25:52.782720] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:47.180 [2024-07-22 10:25:52.782761] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:47.180 [2024-07-22 10:25:52.782768] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:47.180 [2024-07-22 10:25:52.782776] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:47.180 [2024-07-22 10:25:52.782781] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:47.180 [2024-07-22 10:25:52.782930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:47.180 [2024-07-22 10:25:52.783053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:47.180 [2024-07-22 10:25:52.783210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.180 [2024-07-22 10:25:52.783211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:48.117 10:25:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:48.117 10:25:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:09:48.117 10:25:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:48.117 10:25:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:48.117 10:25:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:48.117 10:25:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:48.117 10:25:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:48.117 10:25:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.117 10:25:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:48.117 [2024-07-22 10:25:53.495162] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:48.117 10:25:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.117 10:25:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:48.117 10:25:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.117 10:25:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:48.117 10:25:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.117 10:25:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:48.117 10:25:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:48.117 10:25:53 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.117 10:25:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:48.117 10:25:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.117 10:25:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:48.117 10:25:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.117 10:25:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:48.117 10:25:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.117 10:25:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:48.117 10:25:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.117 10:25:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:48.117 [2024-07-22 10:25:53.554494] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:48.117 10:25:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.117 10:25:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:09:48.117 10:25:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:09:48.117 10:25:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:09:48.117 10:25:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:09:50.656 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.569 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.109 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.020 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.566 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.113 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.045 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.589 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.501 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.042 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.953 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.496 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.036 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.945 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.511 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.421 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.960 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.985 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.525 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.064 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:35.970 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.509 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.047 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.955 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.491 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.401 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.937 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.845 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.384 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.920 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.830 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.373 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.282 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.820 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.419 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.355 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.897 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.810 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.356 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.267 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.953 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.857 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.397 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.936 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.849 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.391 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.953 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.862 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.399 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.313 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.851 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.760 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.300 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.855 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.767 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.307 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.214 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.749 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.661 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.204 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.742 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.651 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.193 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.105 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.646 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.559 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.106 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.018 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.629 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.168 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.073 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.611 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:12:35.148 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.056 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.594 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.502 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.082 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.628 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.538 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.080 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.991 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.539 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.457 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.000 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:02.010 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.548 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.085 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.005 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.564 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.107 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.023 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.569 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.475 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.018 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.560 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:27.515 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:30.056 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.597 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.504 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.050 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.050 10:29:42 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:13:37.050 10:29:42 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:13:37.050 10:29:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:37.050 10:29:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:13:37.050 10:29:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:37.050 10:29:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:13:37.050 10:29:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:37.050 10:29:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:37.050 rmmod nvme_tcp 00:13:37.050 rmmod nvme_fabrics 00:13:37.050 rmmod nvme_keyring 00:13:37.050 10:29:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:37.050 10:29:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:13:37.050 10:29:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:13:37.050 10:29:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1785336 ']' 00:13:37.050 10:29:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1785336 00:13:37.050 10:29:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 
1785336 ']' 00:13:37.050 10:29:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 1785336 00:13:37.050 10:29:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:13:37.050 10:29:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:37.050 10:29:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1785336 00:13:37.050 10:29:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:37.050 10:29:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:37.050 10:29:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1785336' 00:13:37.050 killing process with pid 1785336 00:13:37.050 10:29:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 1785336 00:13:37.050 10:29:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 1785336 00:13:37.050 10:29:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:37.050 10:29:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:37.050 10:29:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:37.050 10:29:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:37.050 10:29:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:37.050 10:29:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:37.050 10:29:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:37.050 10:29:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:39.589 10:29:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:39.589 00:13:39.589 real 4m0.635s 00:13:39.589 user 15m14.376s 00:13:39.589 sys 0m19.809s 00:13:39.589 10:29:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:39.589 10:29:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:39.589 ************************************ 00:13:39.589 END TEST nvmf_connect_disconnect 00:13:39.589 ************************************ 00:13:39.589 10:29:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:39.589 10:29:44 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:39.589 10:29:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:39.589 10:29:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:39.589 10:29:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:39.589 ************************************ 00:13:39.589 START TEST nvmf_multitarget 00:13:39.589 ************************************ 00:13:39.589 10:29:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:39.589 * Looking for test storage... 
00:13:39.589 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:39.589 10:29:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:39.589 10:29:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:13:39.589 10:29:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:39.589 10:29:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:39.589 10:29:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:39.589 10:29:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:39.589 10:29:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:39.589 10:29:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:39.589 10:29:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:39.589 10:29:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:39.589 10:29:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:39.589 10:29:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:39.589 10:29:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:39.589 10:29:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:39.589 10:29:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:39.589 10:29:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:39.589 10:29:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:39.589 10:29:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:39.589 10:29:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:39.589 10:29:44 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:39.589 10:29:44 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:39.589 10:29:44 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:39.589 10:29:44 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.589 10:29:44 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.589 10:29:44 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.589 10:29:44 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:13:39.589 10:29:44 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.589 10:29:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:13:39.589 10:29:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:39.589 10:29:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:39.589 10:29:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:39.589 10:29:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:39.589 10:29:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:39.589 10:29:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:39.589 10:29:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:39.589 10:29:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:39.589 10:29:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:39.589 10:29:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:13:39.589 10:29:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:39.589 10:29:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:39.589 10:29:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:39.589 10:29:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:39.589 10:29:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:39.589 10:29:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:13:39.589 10:29:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:39.589 10:29:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:39.589 10:29:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:39.589 10:29:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:39.589 10:29:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:13:39.589 10:29:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:47.724 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:47.724 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:47.724 Found net devices under 0000:31:00.0: cvl_0_0 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:47.724 Found net devices under 0000:31:00.1: cvl_0_1 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:47.724 10:29:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:47.724 10:29:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:47.724 10:29:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:47.724 10:29:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:47.724 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:47.724 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.637 ms 00:13:47.724 00:13:47.724 --- 10.0.0.2 ping statistics --- 00:13:47.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.725 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms 00:13:47.725 10:29:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:47.725 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:47.725 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:13:47.725 00:13:47.725 --- 10.0.0.1 ping statistics --- 00:13:47.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.725 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:13:47.725 10:29:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:47.725 10:29:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:13:47.725 10:29:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:47.725 10:29:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:47.725 10:29:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:47.725 10:29:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:47.725 10:29:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:47.725 10:29:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:47.725 10:29:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:47.725 10:29:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:47.725 10:29:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:47.725 10:29:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:47.725 10:29:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:47.725 10:29:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1837010 00:13:47.725 10:29:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1837010 00:13:47.725 10:29:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:47.725 10:29:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 1837010 ']' 00:13:47.725 10:29:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.725 10:29:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:47.725 10:29:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:47.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:47.725 10:29:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:47.725 10:29:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:47.725 [2024-07-22 10:29:53.205127] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
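The multitarget checks that follow in the trace drive SPDK's target-management RPCs through test/nvmf/target/multitarget_rpc.py: count the default target, create two named targets, confirm the count went from 1 to 3, delete them again, and confirm only the default target is left. A condensed sketch of that sequence (script path, target names, and the -s 32 argument copied from the trace; the '[ N != N ]' comparisons are paraphrased as -eq tests, and multitarget.sh's trap/cleanup handling is omitted) might look like:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

# Only the default target should exist before the test touches anything.
[ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]

# Create two additional targets; arguments are taken from the trace below.
$rpc nvmf_create_target -n nvmf_tgt_1 -s 32
$rpc nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]

# Tear them down again and confirm only the default target remains.
$rpc nvmf_delete_target -n nvmf_tgt_1
$rpc nvmf_delete_target -n nvmf_tgt_2
[ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]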
00:13:47.725 [2024-07-22 10:29:53.205191] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:47.725 EAL: No free 2048 kB hugepages reported on node 1 00:13:47.725 [2024-07-22 10:29:53.282376] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:47.725 [2024-07-22 10:29:53.322456] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:47.725 [2024-07-22 10:29:53.322496] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:47.725 [2024-07-22 10:29:53.322504] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:47.725 [2024-07-22 10:29:53.322511] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:47.725 [2024-07-22 10:29:53.322517] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:47.725 [2024-07-22 10:29:53.322601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:47.725 [2024-07-22 10:29:53.322720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:47.725 [2024-07-22 10:29:53.322878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.725 [2024-07-22 10:29:53.322879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:48.295 10:29:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:48.295 10:29:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:13:48.295 10:29:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:48.295 10:29:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:48.296 10:29:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:48.556 10:29:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:48.556 10:29:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:48.556 10:29:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:48.556 10:29:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:13:48.556 10:29:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:48.556 10:29:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:48.556 "nvmf_tgt_1" 00:13:48.556 10:29:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:48.817 "nvmf_tgt_2" 00:13:48.817 10:29:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:48.817 10:29:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:13:48.817 10:29:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:13:48.817 10:29:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:48.817 true 00:13:49.078 10:29:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:49.078 true 00:13:49.078 10:29:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:49.078 10:29:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:13:49.078 10:29:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:49.078 10:29:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:49.078 10:29:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:13:49.078 10:29:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:49.078 10:29:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:13:49.078 10:29:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:49.078 10:29:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:13:49.078 10:29:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:49.078 10:29:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:49.078 rmmod nvme_tcp 00:13:49.078 rmmod nvme_fabrics 00:13:49.078 rmmod nvme_keyring 00:13:49.338 10:29:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:49.338 10:29:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:13:49.338 10:29:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:13:49.338 10:29:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1837010 ']' 00:13:49.338 10:29:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1837010 00:13:49.338 10:29:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 1837010 ']' 00:13:49.338 10:29:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 1837010 00:13:49.338 10:29:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:13:49.338 10:29:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:49.338 10:29:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1837010 00:13:49.339 10:29:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:49.339 10:29:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:49.339 10:29:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1837010' 00:13:49.339 killing process with pid 1837010 00:13:49.339 10:29:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 1837010 00:13:49.339 10:29:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 1837010 00:13:49.339 10:29:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:49.339 10:29:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:49.339 10:29:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:49.339 10:29:54 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:49.339 10:29:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:49.339 10:29:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:49.339 10:29:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:49.339 10:29:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:51.376 10:29:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:51.376 00:13:51.376 real 0m12.187s 00:13:51.376 user 0m9.654s 00:13:51.376 sys 0m6.434s 00:13:51.376 10:29:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:51.376 10:29:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:51.376 ************************************ 00:13:51.376 END TEST nvmf_multitarget 00:13:51.376 ************************************ 00:13:51.641 10:29:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:51.641 10:29:57 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:51.641 10:29:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:51.641 10:29:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:51.641 10:29:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:51.641 ************************************ 00:13:51.641 START TEST nvmf_rpc 00:13:51.641 ************************************ 00:13:51.641 10:29:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:51.641 * Looking for test storage... 
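The nvmf_multitarget run that finishes above exercises SPDK's multi-target RPCs through test/nvmf/target/multitarget_rpc.py: it creates two additional targets, confirms the target count with jq, then deletes them and checks that only the default target remains. A minimal sketch of that flow, assuming the same workspace path and an already-running nvmf_tgt with one default target (both taken from this log):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
# exactly one (default) target to start with
[ "$($RPC nvmf_get_targets | jq length)" -eq 1 ] || exit 1
# add two named targets (-n name, -s 32 as passed in this run)
$RPC nvmf_create_target -n nvmf_tgt_1 -s 32
$RPC nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$($RPC nvmf_get_targets | jq length)" -eq 3 ] || exit 1
# remove them again and verify only the default target is left
$RPC nvmf_delete_target -n nvmf_tgt_1
$RPC nvmf_delete_target -n nvmf_tgt_2
[ "$($RPC nvmf_get_targets | jq length)" -eq 1 ] || exit 1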
00:13:51.641 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:51.641 10:29:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:51.641 10:29:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:13:51.641 10:29:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:51.641 10:29:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:51.641 10:29:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:51.641 10:29:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:51.641 10:29:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:51.641 10:29:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:51.641 10:29:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:51.641 10:29:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:51.641 10:29:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:51.641 10:29:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:51.641 10:29:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:51.641 10:29:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:51.641 10:29:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:51.641 10:29:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:51.641 10:29:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:51.642 10:29:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:51.642 10:29:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:51.642 10:29:57 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:51.642 10:29:57 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:51.642 10:29:57 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:51.642 10:29:57 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.642 10:29:57 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.642 10:29:57 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.642 10:29:57 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:13:51.642 10:29:57 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.642 10:29:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:13:51.642 10:29:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:51.642 10:29:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:51.642 10:29:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:51.642 10:29:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:51.642 10:29:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:51.642 10:29:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:51.642 10:29:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:51.642 10:29:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:51.642 10:29:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:13:51.642 10:29:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:13:51.642 10:29:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:51.642 10:29:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:51.642 10:29:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:51.642 10:29:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:51.642 10:29:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:51.642 10:29:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:51.642 10:29:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:51.642 10:29:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:51.642 10:29:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:51.642 10:29:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:51.642 10:29:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:13:51.642 10:29:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
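The gather_supported_nvmf_pci_devs step traced here first classifies candidate NICs by PCI vendor and device ID (Intel E810/X722 and several Mellanox parts) and then resolves each selected PCI function to its kernel net device through sysfs, which is where the "Found net devices under 0000:31:00.0: cvl_0_0" lines below come from. A small sketch of that sysfs lookup, with the PCI address taken from this run:

pci=0000:31:00.0   # first E810 port found below (0x8086 - 0x159b)
# every entry under .../net is a kernel interface bound to that PCI function
for dev in /sys/bus/pci/devices/$pci/net/*; do
    echo "Found net device under $pci: ${dev##*/}"
done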
00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:59.809 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:59.809 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:59.809 Found net devices under 0000:31:00.0: cvl_0_0 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:59.809 Found net devices under 0000:31:00.1: cvl_0_1 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:59.809 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:59.809 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.710 ms 00:13:59.809 00:13:59.809 --- 10.0.0.2 ping statistics --- 00:13:59.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:59.809 rtt min/avg/max/mdev = 0.710/0.710/0.710/0.000 ms 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:59.809 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:59.809 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:13:59.809 00:13:59.809 --- 10.0.0.1 ping statistics --- 00:13:59.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:59.809 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:59.809 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:00.070 10:30:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:14:00.070 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:00.070 10:30:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:00.070 10:30:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.070 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1842183 00:14:00.070 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1842183 00:14:00.070 10:30:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:00.070 10:30:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 1842183 ']' 00:14:00.070 10:30:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:00.070 10:30:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:00.070 10:30:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:00.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:00.070 10:30:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:00.070 10:30:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.070 [2024-07-22 10:30:05.579295] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:14:00.070 [2024-07-22 10:30:05.579358] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:00.070 EAL: No free 2048 kB hugepages reported on node 1 00:14:00.070 [2024-07-22 10:30:05.657047] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:00.070 [2024-07-22 10:30:05.697362] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:00.070 [2024-07-22 10:30:05.697407] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
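The nvmf_tcp_init sequence above splits the two discovered ports between the host and a private network namespace: cvl_0_0 is moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2/24 (the target side), cvl_0_1 stays in the default namespace as 10.0.0.1/24 (the initiator side), TCP port 4420 is opened in iptables, and both directions are verified with ping. A condensed sketch of the same wiring, with interface and namespace names taken from this run:

NS=cvl_0_0_ns_spdk; TGT_IF=cvl_0_0; INI_IF=cvl_0_1
ip netns add $NS
ip link set $TGT_IF netns $NS                         # target port lives inside the namespace
ip addr add 10.0.0.1/24 dev $INI_IF
ip netns exec $NS ip addr add 10.0.0.2/24 dev $TGT_IF
ip link set $INI_IF up
ip netns exec $NS ip link set $TGT_IF up
ip netns exec $NS ip link set lo up
iptables -I INPUT 1 -i $INI_IF -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                    # initiator side -> namespaced target
ip netns exec $NS ping -c 1 10.0.0.1                  # namespace -> initiator side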
00:14:00.070 [2024-07-22 10:30:05.697416] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:00.070 [2024-07-22 10:30:05.697422] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:00.070 [2024-07-22 10:30:05.697428] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:00.070 [2024-07-22 10:30:05.697504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:00.070 [2024-07-22 10:30:05.697614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:00.070 [2024-07-22 10:30:05.697770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.070 [2024-07-22 10:30:05.697771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:01.011 10:30:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:01.011 10:30:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:14:01.011 10:30:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:01.011 10:30:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:01.011 10:30:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:01.011 10:30:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:01.011 10:30:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:14:01.011 10:30:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.011 10:30:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:01.011 10:30:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.011 10:30:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:14:01.011 "tick_rate": 2400000000, 00:14:01.011 "poll_groups": [ 00:14:01.011 { 00:14:01.011 "name": "nvmf_tgt_poll_group_000", 00:14:01.011 "admin_qpairs": 0, 00:14:01.011 "io_qpairs": 0, 00:14:01.011 "current_admin_qpairs": 0, 00:14:01.011 "current_io_qpairs": 0, 00:14:01.011 "pending_bdev_io": 0, 00:14:01.011 "completed_nvme_io": 0, 00:14:01.011 "transports": [] 00:14:01.011 }, 00:14:01.011 { 00:14:01.011 "name": "nvmf_tgt_poll_group_001", 00:14:01.011 "admin_qpairs": 0, 00:14:01.011 "io_qpairs": 0, 00:14:01.011 "current_admin_qpairs": 0, 00:14:01.011 "current_io_qpairs": 0, 00:14:01.011 "pending_bdev_io": 0, 00:14:01.011 "completed_nvme_io": 0, 00:14:01.011 "transports": [] 00:14:01.011 }, 00:14:01.011 { 00:14:01.011 "name": "nvmf_tgt_poll_group_002", 00:14:01.011 "admin_qpairs": 0, 00:14:01.011 "io_qpairs": 0, 00:14:01.011 "current_admin_qpairs": 0, 00:14:01.011 "current_io_qpairs": 0, 00:14:01.011 "pending_bdev_io": 0, 00:14:01.011 "completed_nvme_io": 0, 00:14:01.011 "transports": [] 00:14:01.011 }, 00:14:01.011 { 00:14:01.011 "name": "nvmf_tgt_poll_group_003", 00:14:01.011 "admin_qpairs": 0, 00:14:01.011 "io_qpairs": 0, 00:14:01.011 "current_admin_qpairs": 0, 00:14:01.011 "current_io_qpairs": 0, 00:14:01.011 "pending_bdev_io": 0, 00:14:01.011 "completed_nvme_io": 0, 00:14:01.011 "transports": [] 00:14:01.011 } 00:14:01.011 ] 00:14:01.011 }' 00:14:01.011 10:30:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:14:01.011 10:30:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:14:01.011 10:30:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:14:01.011 10:30:06 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:14:01.011 10:30:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:14:01.011 10:30:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:14:01.011 10:30:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:14:01.011 10:30:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:01.011 10:30:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.011 10:30:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:01.011 [2024-07-22 10:30:06.510497] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:01.011 10:30:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.011 10:30:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:14:01.011 10:30:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.011 10:30:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:01.011 10:30:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.011 10:30:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:14:01.011 "tick_rate": 2400000000, 00:14:01.011 "poll_groups": [ 00:14:01.011 { 00:14:01.011 "name": "nvmf_tgt_poll_group_000", 00:14:01.011 "admin_qpairs": 0, 00:14:01.011 "io_qpairs": 0, 00:14:01.011 "current_admin_qpairs": 0, 00:14:01.011 "current_io_qpairs": 0, 00:14:01.011 "pending_bdev_io": 0, 00:14:01.011 "completed_nvme_io": 0, 00:14:01.011 "transports": [ 00:14:01.011 { 00:14:01.011 "trtype": "TCP" 00:14:01.011 } 00:14:01.011 ] 00:14:01.011 }, 00:14:01.011 { 00:14:01.011 "name": "nvmf_tgt_poll_group_001", 00:14:01.011 "admin_qpairs": 0, 00:14:01.011 "io_qpairs": 0, 00:14:01.011 "current_admin_qpairs": 0, 00:14:01.011 "current_io_qpairs": 0, 00:14:01.011 "pending_bdev_io": 0, 00:14:01.011 "completed_nvme_io": 0, 00:14:01.011 "transports": [ 00:14:01.011 { 00:14:01.011 "trtype": "TCP" 00:14:01.011 } 00:14:01.011 ] 00:14:01.011 }, 00:14:01.011 { 00:14:01.011 "name": "nvmf_tgt_poll_group_002", 00:14:01.011 "admin_qpairs": 0, 00:14:01.011 "io_qpairs": 0, 00:14:01.011 "current_admin_qpairs": 0, 00:14:01.011 "current_io_qpairs": 0, 00:14:01.011 "pending_bdev_io": 0, 00:14:01.011 "completed_nvme_io": 0, 00:14:01.011 "transports": [ 00:14:01.011 { 00:14:01.011 "trtype": "TCP" 00:14:01.011 } 00:14:01.011 ] 00:14:01.011 }, 00:14:01.011 { 00:14:01.011 "name": "nvmf_tgt_poll_group_003", 00:14:01.011 "admin_qpairs": 0, 00:14:01.011 "io_qpairs": 0, 00:14:01.011 "current_admin_qpairs": 0, 00:14:01.012 "current_io_qpairs": 0, 00:14:01.012 "pending_bdev_io": 0, 00:14:01.012 "completed_nvme_io": 0, 00:14:01.012 "transports": [ 00:14:01.012 { 00:14:01.012 "trtype": "TCP" 00:14:01.012 } 00:14:01.012 ] 00:14:01.012 } 00:14:01.012 ] 00:14:01.012 }' 00:14:01.012 10:30:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:14:01.012 10:30:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:01.012 10:30:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:01.012 10:30:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:01.012 10:30:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:14:01.012 10:30:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:14:01.012 10:30:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
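What follows is the core of the rpc.sh flow: a 64 MB Malloc bdev with 512-byte blocks is exported as a namespace of nqn.2016-06.io.spdk:cnode1, a TCP listener is added on 10.0.0.2:4420, and the initiator attaches and detaches with nvme-cli; the test then repeats variations of this with host access disallowed and re-allowed. A condensed sketch using the RPC and nvme-cli invocations recorded below (rpc_cmd is the harness's JSON-RPC helper, roughly equivalent to calling scripts/rpc.py against the running target; the host NQN/ID values are the ones generated for this run):

# provision a RAM-backed namespace behind an NVMe-oF/TCP subsystem
rpc_cmd bdev_malloc_create 64 512 -b Malloc1
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# attach and detach from the initiator side
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
             --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 \
             -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
# tear the subsystem down again
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

Note that the log below first disables host access (nvmf_subsystem_allow_any_host -d), shows the expected "does not allow host" connect failure, and only then grants access via nvmf_subsystem_add_host before the connect succeeds.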
00:14:01.012 10:30:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:01.012 10:30:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:01.012 10:30:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:14:01.012 10:30:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:14:01.012 10:30:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:14:01.012 10:30:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:14:01.012 10:30:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:01.012 10:30:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.012 10:30:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:01.012 Malloc1 00:14:01.012 10:30:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.012 10:30:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:01.012 10:30:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.012 10:30:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:01.012 10:30:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.012 10:30:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:01.012 10:30:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.012 10:30:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:01.012 10:30:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.012 10:30:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:14:01.012 10:30:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.012 10:30:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:01.012 10:30:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.012 10:30:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:01.012 10:30:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.012 10:30:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:01.012 [2024-07-22 10:30:06.698290] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:01.012 10:30:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.012 10:30:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:14:01.012 10:30:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:14:01.012 10:30:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:14:01.012 10:30:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:14:01.012 10:30:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:01.012 10:30:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:14:01.272 10:30:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:01.272 10:30:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:14:01.272 10:30:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:01.272 10:30:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:14:01.272 10:30:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:14:01.272 10:30:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:14:01.272 [2024-07-22 10:30:06.724901] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:14:01.272 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:01.272 could not add new controller: failed to write to nvme-fabrics device 00:14:01.272 10:30:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:14:01.272 10:30:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:01.272 10:30:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:01.272 10:30:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:01.272 10:30:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:01.272 10:30:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.272 10:30:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:01.272 10:30:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.272 10:30:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:02.654 10:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:14:02.654 10:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:02.654 10:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:02.654 10:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:02.654 10:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:04.561 10:30:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:04.561 10:30:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:04.561 10:30:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:04.561 10:30:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:04.561 10:30:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:04.561 10:30:10 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:04.561 10:30:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:04.821 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:04.821 10:30:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:04.821 10:30:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:04.821 10:30:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:04.821 10:30:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:04.821 10:30:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:04.821 10:30:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:04.821 10:30:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:04.821 10:30:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:04.821 10:30:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.821 10:30:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.821 10:30:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.821 10:30:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:04.821 10:30:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:14:04.821 10:30:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:04.821 10:30:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:14:04.821 10:30:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:04.821 10:30:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:14:04.821 10:30:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:04.821 10:30:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:14:04.821 10:30:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:04.821 10:30:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:14:04.821 10:30:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:14:04.821 10:30:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:04.821 [2024-07-22 10:30:10.381217] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:14:04.821 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:04.821 could not add new controller: failed to write to nvme-fabrics device 00:14:04.821 10:30:10 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:14:04.821 10:30:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:04.821 10:30:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:04.821 10:30:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:04.821 10:30:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:14:04.821 10:30:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.821 10:30:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.821 10:30:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.821 10:30:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:06.203 10:30:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:14:06.203 10:30:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:06.203 10:30:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:06.203 10:30:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:06.203 10:30:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:08.763 10:30:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:08.763 10:30:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:08.763 10:30:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:08.763 10:30:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:08.763 10:30:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:08.763 10:30:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:08.763 10:30:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:08.763 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:08.763 10:30:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:08.763 10:30:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:08.763 10:30:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:08.763 10:30:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:08.763 10:30:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:08.763 10:30:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:08.763 10:30:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:08.763 10:30:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:08.763 10:30:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.763 10:30:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.763 10:30:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.763 10:30:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:14:08.763 10:30:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:08.763 10:30:14 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:08.763 10:30:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.763 10:30:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.763 10:30:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.763 10:30:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:08.763 10:30:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.763 10:30:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.763 [2024-07-22 10:30:14.027349] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:08.763 10:30:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.763 10:30:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:08.763 10:30:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.763 10:30:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.763 10:30:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.763 10:30:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:08.763 10:30:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.763 10:30:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.763 10:30:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.763 10:30:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:10.146 10:30:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:10.146 10:30:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:10.146 10:30:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:10.146 10:30:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:10.146 10:30:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:12.058 10:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:12.058 10:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:12.058 10:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:12.058 10:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:12.058 10:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:12.058 10:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:12.058 10:30:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:12.058 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.058 10:30:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:12.058 10:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:12.058 10:30:17 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:12.058 10:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:12.058 10:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:12.058 10:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:12.058 10:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:12.058 10:30:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:12.058 10:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.058 10:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.058 10:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.058 10:30:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:12.058 10:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.058 10:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.058 10:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.058 10:30:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:12.058 10:30:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:12.058 10:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.058 10:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.058 10:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.058 10:30:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:12.058 10:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.058 10:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.058 [2024-07-22 10:30:17.685059] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:12.058 10:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.058 10:30:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:12.058 10:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.058 10:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.058 10:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.058 10:30:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:12.058 10:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.058 10:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.058 10:30:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.058 10:30:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:13.976 10:30:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:13.976 10:30:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:14:13.976 10:30:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:13.976 10:30:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:13.976 10:30:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:15.905 10:30:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:15.905 10:30:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:15.905 10:30:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:15.905 10:30:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:15.905 10:30:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:15.905 10:30:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:15.905 10:30:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:15.905 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:15.905 10:30:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:15.905 10:30:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:15.905 10:30:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:15.906 10:30:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:15.906 10:30:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:15.906 10:30:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:15.906 10:30:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:15.906 10:30:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:15.906 10:30:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.906 10:30:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:15.906 10:30:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.906 10:30:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:15.906 10:30:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.906 10:30:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:15.906 10:30:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.906 10:30:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:15.906 10:30:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:15.906 10:30:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.906 10:30:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:15.906 10:30:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.906 10:30:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:15.906 10:30:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.906 10:30:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:15.906 [2024-07-22 10:30:21.387115] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:14:15.906 10:30:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.906 10:30:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:15.906 10:30:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.906 10:30:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:15.906 10:30:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.906 10:30:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:15.906 10:30:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.906 10:30:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:15.906 10:30:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.906 10:30:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:17.289 10:30:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:17.289 10:30:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:17.289 10:30:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:17.289 10:30:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:17.289 10:30:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:19.826 10:30:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:19.826 10:30:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:19.826 10:30:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:19.826 10:30:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:19.826 10:30:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:19.826 10:30:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:19.826 10:30:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:19.826 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.826 10:30:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:19.826 10:30:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:19.826 10:30:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:19.826 10:30:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:19.826 10:30:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:19.826 10:30:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:19.826 10:30:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:19.826 10:30:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:19.826 10:30:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.826 10:30:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.826 10:30:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:14:19.826 10:30:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:19.826 10:30:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.826 10:30:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.826 10:30:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.826 10:30:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:19.826 10:30:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:19.826 10:30:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.826 10:30:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.826 10:30:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.826 10:30:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:19.826 10:30:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.826 10:30:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.826 [2024-07-22 10:30:25.093295] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:19.826 10:30:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.826 10:30:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:19.826 10:30:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.826 10:30:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.826 10:30:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.826 10:30:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:19.826 10:30:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.826 10:30:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.826 10:30:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.826 10:30:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:21.210 10:30:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:21.210 10:30:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:21.210 10:30:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:21.210 10:30:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:21.210 10:30:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:23.119 10:30:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:23.119 10:30:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:23.119 10:30:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:23.119 10:30:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:23.119 10:30:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:23.119 
10:30:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:23.119 10:30:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:23.119 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.119 10:30:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:23.119 10:30:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:23.119 10:30:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:23.119 10:30:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:23.119 10:30:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:23.119 10:30:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:23.119 10:30:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:23.119 10:30:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:23.119 10:30:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.119 10:30:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.119 10:30:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.119 10:30:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:23.119 10:30:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.119 10:30:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.119 10:30:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.119 10:30:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:23.119 10:30:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:23.119 10:30:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.119 10:30:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.119 10:30:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.119 10:30:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:23.119 10:30:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.119 10:30:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.119 [2024-07-22 10:30:28.798536] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:23.119 10:30:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.119 10:30:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:23.119 10:30:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.119 10:30:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.119 10:30:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.119 10:30:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:23.119 10:30:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.119 10:30:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.379 10:30:28 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.379 10:30:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:24.761 10:30:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:24.761 10:30:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:24.761 10:30:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:24.761 10:30:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:24.761 10:30:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:26.674 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:26.674 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:26.674 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:26.674 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:26.674 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:26.674 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:26.674 10:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:26.935 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:26.935 10:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:26.935 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:26.935 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:26.935 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:26.935 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:26.935 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:26.935 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:26.935 10:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:26.935 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.935 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.935 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.935 10:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:26.935 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.935 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.935 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.935 10:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:14:26.935 10:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:26.935 10:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:26.935 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.935 10:30:32 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:14:26.935 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.935 10:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:26.935 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.935 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.935 [2024-07-22 10:30:32.477764] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:26.935 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.935 10:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:26.935 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.935 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.935 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.935 10:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:26.935 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.935 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.935 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.935 10:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:26.935 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.935 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.935 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.935 10:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:26.935 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.935 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.935 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.935 10:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:26.935 10:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:26.935 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.935 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.935 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.935 10:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:26.935 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.935 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.935 [2024-07-22 10:30:32.537883] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:26.935 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.935 10:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:26.935 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:14:26.935 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.935 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.935 10:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:26.936 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.936 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.936 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.936 10:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:26.936 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.936 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.936 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.936 10:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:26.936 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.936 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.936 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.936 10:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:26.936 10:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:26.936 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.936 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.936 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.936 10:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:26.936 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.936 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.936 [2024-07-22 10:30:32.602064] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:26.936 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.936 10:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:26.936 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.936 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.936 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.936 10:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:26.936 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.936 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.936 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.936 10:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:26.936 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.936 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:14:27.197 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.197 10:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:27.197 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.197 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.197 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.197 10:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:27.197 10:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:27.197 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.197 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.197 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.197 10:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:27.197 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.197 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.197 [2024-07-22 10:30:32.662269] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:27.197 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
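The loop traced here exercises the SPDK RPC surface end to end: each pass creates a subsystem, adds a TCP listener and a Malloc namespace, allows any host, then removes the namespace and deletes the subsystem, and the earlier passes also attach and detach a host with nvme-cli before summing qpair counters from nvmf_get_stats. A rough standalone sketch of that cycle, built only from the rpc.py and nvme commands visible in this trace (the rpc.py path, bdev name, serial, and 10.0.0.2:4420 address are the ones this rig happens to use and would need adjusting elsewhere):

    # sketch of the create/connect/teardown cycle driven by target/rpc.sh
    RPC=./scripts/rpc.py                     # path is per-checkout; the CI run uses the workspace copy
    NQN=nqn.2016-06.io.spdk:cnode1
    $RPC nvmf_create_subsystem $NQN -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_ns $NQN Malloc1
    $RPC nvmf_subsystem_allow_any_host $NQN
    # host side: connect, wait for the serial to appear, disconnect
    nvme connect -t tcp -n $NQN -a 10.0.0.2 -s 4420
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME    # expect 1
    nvme disconnect -n $NQN
    # target side: tear down
    $RPC nvmf_subsystem_remove_ns $NQN 1
    $RPC nvmf_delete_subsystem $NQN
    # after the loop, the jsum helper totals qpairs across poll groups
    $RPC nvmf_get_stats | jq '.poll_groups[].io_qpairs' | awk '{s+=$1}END{print s}'

The nvmf_get_stats output and the jq/awk aggregation shown a little further down in the trace follow exactly this pattern; the test only asserts that the summed admin and I/O qpair counts are greater than zero.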
00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.198 [2024-07-22 10:30:32.718461] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:14:27.198 "tick_rate": 2400000000, 00:14:27.198 "poll_groups": [ 00:14:27.198 { 00:14:27.198 "name": "nvmf_tgt_poll_group_000", 00:14:27.198 "admin_qpairs": 0, 00:14:27.198 "io_qpairs": 224, 00:14:27.198 "current_admin_qpairs": 0, 00:14:27.198 "current_io_qpairs": 0, 00:14:27.198 "pending_bdev_io": 0, 00:14:27.198 "completed_nvme_io": 319, 00:14:27.198 "transports": [ 00:14:27.198 { 00:14:27.198 "trtype": "TCP" 00:14:27.198 } 00:14:27.198 ] 00:14:27.198 }, 00:14:27.198 { 00:14:27.198 "name": "nvmf_tgt_poll_group_001", 00:14:27.198 "admin_qpairs": 1, 00:14:27.198 "io_qpairs": 223, 00:14:27.198 "current_admin_qpairs": 0, 00:14:27.198 "current_io_qpairs": 0, 00:14:27.198 "pending_bdev_io": 0, 00:14:27.198 "completed_nvme_io": 475, 00:14:27.198 "transports": [ 00:14:27.198 { 00:14:27.198 "trtype": "TCP" 00:14:27.198 } 00:14:27.198 ] 00:14:27.198 }, 00:14:27.198 { 
00:14:27.198 "name": "nvmf_tgt_poll_group_002", 00:14:27.198 "admin_qpairs": 6, 00:14:27.198 "io_qpairs": 218, 00:14:27.198 "current_admin_qpairs": 0, 00:14:27.198 "current_io_qpairs": 0, 00:14:27.198 "pending_bdev_io": 0, 00:14:27.198 "completed_nvme_io": 220, 00:14:27.198 "transports": [ 00:14:27.198 { 00:14:27.198 "trtype": "TCP" 00:14:27.198 } 00:14:27.198 ] 00:14:27.198 }, 00:14:27.198 { 00:14:27.198 "name": "nvmf_tgt_poll_group_003", 00:14:27.198 "admin_qpairs": 0, 00:14:27.198 "io_qpairs": 224, 00:14:27.198 "current_admin_qpairs": 0, 00:14:27.198 "current_io_qpairs": 0, 00:14:27.198 "pending_bdev_io": 0, 00:14:27.198 "completed_nvme_io": 225, 00:14:27.198 "transports": [ 00:14:27.198 { 00:14:27.198 "trtype": "TCP" 00:14:27.198 } 00:14:27.198 ] 00:14:27.198 } 00:14:27.198 ] 00:14:27.198 }' 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:27.198 10:30:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:27.198 rmmod nvme_tcp 00:14:27.460 rmmod nvme_fabrics 00:14:27.460 rmmod nvme_keyring 00:14:27.460 10:30:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:27.460 10:30:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:14:27.460 10:30:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:14:27.460 10:30:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1842183 ']' 00:14:27.460 10:30:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1842183 00:14:27.460 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 1842183 ']' 00:14:27.460 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 1842183 00:14:27.460 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:14:27.460 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:27.460 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1842183 00:14:27.460 10:30:32 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:27.460 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:27.460 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1842183' 00:14:27.460 killing process with pid 1842183 00:14:27.460 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 1842183 00:14:27.460 10:30:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 1842183 00:14:27.460 10:30:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:27.460 10:30:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:27.460 10:30:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:27.460 10:30:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:27.460 10:30:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:27.460 10:30:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:27.460 10:30:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:27.460 10:30:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.009 10:30:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:30.009 00:14:30.009 real 0m38.068s 00:14:30.009 user 1m51.843s 00:14:30.009 sys 0m7.768s 00:14:30.009 10:30:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:30.009 10:30:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.009 ************************************ 00:14:30.009 END TEST nvmf_rpc 00:14:30.009 ************************************ 00:14:30.009 10:30:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:30.009 10:30:35 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:30.009 10:30:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:30.009 10:30:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:30.009 10:30:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:30.009 ************************************ 00:14:30.009 START TEST nvmf_invalid 00:14:30.009 ************************************ 00:14:30.009 10:30:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:30.009 * Looking for test storage... 
00:14:30.009 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:30.009 10:30:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:30.009 10:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:14:30.009 10:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:30.009 10:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:30.009 10:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:30.009 10:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:30.009 10:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:30.009 10:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:30.009 10:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:30.009 10:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:30.009 10:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:30.009 10:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:30.009 10:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:30.009 10:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:30.009 10:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:30.009 10:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:30.009 10:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:30.009 10:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:30.009 10:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:30.009 10:30:35 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:30.009 10:30:35 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:30.009 10:30:35 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:30.009 10:30:35 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.009 10:30:35 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.009 10:30:35 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.009 10:30:35 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:14:30.009 10:30:35 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.009 10:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:14:30.009 10:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:30.009 10:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:30.009 10:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:30.009 10:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:30.009 10:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:30.009 10:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:30.009 10:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:30.009 10:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:30.009 10:30:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:30.009 10:30:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:30.009 10:30:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:14:30.009 10:30:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:14:30.009 10:30:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:14:30.009 10:30:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:14:30.009 10:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:30.009 10:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:30.009 10:30:35 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:14:30.009 10:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:30.009 10:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:30.009 10:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:30.009 10:30:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:30.009 10:30:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.009 10:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:30.009 10:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:30.009 10:30:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:14:30.009 10:30:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:38.254 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:38.254 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:14:38.254 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:38.254 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:38.254 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:38.254 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:38.254 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:38.254 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:14:38.254 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:38.255 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:38.255 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:38.255 Found net devices under 0000:31:00.0: cvl_0_0 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:38.255 Found net devices under 0000:31:00.1: cvl_0_1 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:38.255 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:38.255 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.514 ms 00:14:38.255 00:14:38.255 --- 10.0.0.2 ping statistics --- 00:14:38.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.255 rtt min/avg/max/mdev = 0.514/0.514/0.514/0.000 ms 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:38.255 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:38.255 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:14:38.255 00:14:38.255 --- 10.0.0.1 ping statistics --- 00:14:38.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.255 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1852846 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1852846 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 1852846 ']' 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:38.255 10:30:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:38.255 [2024-07-22 10:30:43.659047] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
00:14:38.255 [2024-07-22 10:30:43.659109] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:38.255 EAL: No free 2048 kB hugepages reported on node 1 00:14:38.255 [2024-07-22 10:30:43.736807] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:38.256 [2024-07-22 10:30:43.777997] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:38.256 [2024-07-22 10:30:43.778037] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:38.256 [2024-07-22 10:30:43.778046] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:38.256 [2024-07-22 10:30:43.778052] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:38.256 [2024-07-22 10:30:43.778058] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:38.256 [2024-07-22 10:30:43.778195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:38.256 [2024-07-22 10:30:43.778320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:38.256 [2024-07-22 10:30:43.778466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.256 [2024-07-22 10:30:43.778466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:38.826 10:30:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:38.826 10:30:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:14:38.826 10:30:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:38.827 10:30:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:38.827 10:30:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:38.827 10:30:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:38.827 10:30:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:38.827 10:30:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode77 00:14:39.087 [2024-07-22 10:30:44.628502] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:39.087 10:30:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:14:39.087 { 00:14:39.087 "nqn": "nqn.2016-06.io.spdk:cnode77", 00:14:39.087 "tgt_name": "foobar", 00:14:39.087 "method": "nvmf_create_subsystem", 00:14:39.087 "req_id": 1 00:14:39.087 } 00:14:39.087 Got JSON-RPC error response 00:14:39.087 response: 00:14:39.087 { 00:14:39.087 "code": -32603, 00:14:39.087 "message": "Unable to find target foobar" 00:14:39.087 }' 00:14:39.087 10:30:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:14:39.087 { 00:14:39.087 "nqn": "nqn.2016-06.io.spdk:cnode77", 00:14:39.087 "tgt_name": "foobar", 00:14:39.087 "method": "nvmf_create_subsystem", 00:14:39.087 "req_id": 1 00:14:39.087 } 00:14:39.087 Got JSON-RPC error response 00:14:39.087 response: 00:14:39.087 { 00:14:39.087 "code": -32603, 00:14:39.087 "message": "Unable to find target foobar" 00:14:39.087 } == 
*\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:39.087 10:30:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:39.087 10:30:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode21582 00:14:39.349 [2024-07-22 10:30:44.805145] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21582: invalid serial number 'SPDKISFASTANDAWESOME' 00:14:39.349 10:30:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:14:39.349 { 00:14:39.349 "nqn": "nqn.2016-06.io.spdk:cnode21582", 00:14:39.349 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:39.349 "method": "nvmf_create_subsystem", 00:14:39.349 "req_id": 1 00:14:39.349 } 00:14:39.349 Got JSON-RPC error response 00:14:39.349 response: 00:14:39.349 { 00:14:39.349 "code": -32602, 00:14:39.349 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:39.349 }' 00:14:39.349 10:30:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:14:39.349 { 00:14:39.349 "nqn": "nqn.2016-06.io.spdk:cnode21582", 00:14:39.349 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:39.349 "method": "nvmf_create_subsystem", 00:14:39.349 "req_id": 1 00:14:39.349 } 00:14:39.349 Got JSON-RPC error response 00:14:39.349 response: 00:14:39.349 { 00:14:39.349 "code": -32602, 00:14:39.349 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:39.349 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:39.349 10:30:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:39.349 10:30:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode18849 00:14:39.349 [2024-07-22 10:30:44.977640] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18849: invalid model number 'SPDK_Controller' 00:14:39.349 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:14:39.349 { 00:14:39.349 "nqn": "nqn.2016-06.io.spdk:cnode18849", 00:14:39.349 "model_number": "SPDK_Controller\u001f", 00:14:39.349 "method": "nvmf_create_subsystem", 00:14:39.349 "req_id": 1 00:14:39.349 } 00:14:39.349 Got JSON-RPC error response 00:14:39.349 response: 00:14:39.349 { 00:14:39.349 "code": -32602, 00:14:39.349 "message": "Invalid MN SPDK_Controller\u001f" 00:14:39.349 }' 00:14:39.349 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:14:39.349 { 00:14:39.349 "nqn": "nqn.2016-06.io.spdk:cnode18849", 00:14:39.349 "model_number": "SPDK_Controller\u001f", 00:14:39.349 "method": "nvmf_create_subsystem", 00:14:39.349 "req_id": 1 00:14:39.349 } 00:14:39.349 Got JSON-RPC error response 00:14:39.349 response: 00:14:39.349 { 00:14:39.349 "code": -32602, 00:14:39.349 "message": "Invalid MN SPDK_Controller\u001f" 00:14:39.349 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:39.349 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:14:39.349 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:14:39.349 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' 
'87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:39.349 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:39.349 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:39.349 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:39.349 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.349 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:14:39.349 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:14:39.349 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:14:39.349 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.349 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.349 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:14:39.349 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:14:39.349 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:14:39.349 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.349 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.349 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:14:39.349 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:14:39.349 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:14:39.349 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.349 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.349 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:14:39.349 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:14:39.349 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:14:39.349 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.349 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.349 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.611 10:30:45 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.611 10:30:45 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:14:39.611 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:14:39.612 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.612 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.612 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:14:39.612 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:14:39.612 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:14:39.612 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.612 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.612 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ , == \- ]] 00:14:39.612 10:30:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo ',k?pGs1h /dev/null' 00:14:41.959 10:30:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:44.502 10:30:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:44.502 00:14:44.502 real 0m14.319s 00:14:44.502 user 0m19.293s 00:14:44.502 sys 0m7.092s 00:14:44.502 10:30:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:44.502 10:30:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:44.502 ************************************ 00:14:44.502 END TEST nvmf_invalid 00:14:44.502 ************************************ 00:14:44.502 10:30:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:44.502 10:30:49 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:14:44.502 10:30:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:44.502 10:30:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:44.502 10:30:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:44.502 ************************************ 00:14:44.502 START TEST nvmf_abort 00:14:44.502 ************************************ 00:14:44.502 10:30:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:14:44.502 * Looking for test storage... 00:14:44.502 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:44.502 10:30:49 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:44.502 10:30:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:14:44.502 10:30:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:44.502 10:30:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:44.502 10:30:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:44.502 10:30:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:44.502 10:30:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:44.502 10:30:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:44.502 10:30:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:44.502 10:30:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:44.502 10:30:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:44.502 10:30:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:44.502 10:30:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:44.502 10:30:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:44.502 10:30:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:44.502 10:30:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:44.502 10:30:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:44.502 10:30:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:44.502 10:30:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:44.502 10:30:49 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:44.502 10:30:49 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:44.502 10:30:49 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:44.502 10:30:49 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.502 10:30:49 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.502 10:30:49 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.502 10:30:49 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:14:44.502 10:30:49 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.502 10:30:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:14:44.502 10:30:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:44.502 10:30:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:44.502 10:30:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:44.502 10:30:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:44.502 10:30:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:44.502 10:30:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:44.502 10:30:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:44.502 10:30:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:44.502 10:30:49 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:44.502 10:30:49 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:14:44.502 10:30:49 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:14:44.502 10:30:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:44.502 10:30:49 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:44.502 10:30:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:44.502 10:30:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:44.502 10:30:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:44.502 10:30:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:44.502 10:30:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:44.502 10:30:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:44.502 10:30:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:44.502 10:30:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:44.502 10:30:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:14:44.502 10:30:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:52.641 
10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:52.641 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:52.641 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:52.641 Found net devices under 0000:31:00.0: cvl_0_0 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:52.641 Found net devices under 0000:31:00.1: cvl_0_1 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:52.641 10:30:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:52.641 10:30:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:52.641 10:30:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:52.641 10:30:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:52.641 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:52.641 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.572 ms 00:14:52.641 00:14:52.641 --- 10.0.0.2 ping statistics --- 00:14:52.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.641 rtt min/avg/max/mdev = 0.572/0.572/0.572/0.000 ms 00:14:52.641 10:30:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:52.641 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:52.641 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.341 ms 00:14:52.641 00:14:52.641 --- 10.0.0.1 ping statistics --- 00:14:52.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.641 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:14:52.641 10:30:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:52.642 10:30:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:14:52.642 10:30:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:52.642 10:30:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:52.642 10:30:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:52.642 10:30:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:52.642 10:30:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:52.642 10:30:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:52.642 10:30:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:52.642 10:30:58 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:14:52.642 10:30:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:52.642 10:30:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:52.642 10:30:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:52.642 10:30:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=1858441 00:14:52.642 10:30:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1858441 00:14:52.642 10:30:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:52.642 10:30:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 1858441 ']' 00:14:52.642 10:30:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:52.642 10:30:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:52.642 10:30:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:52.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:52.642 10:30:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:52.642 10:30:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:52.642 [2024-07-22 10:30:58.151313] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
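The nvmf_tcp_init and nvmfappstart steps traced above reduce to a little iproute2 plumbing plus one target process. A minimal stand-alone sketch of the same bring-up, assuming root, the two ice ports already renamed cvl_0_0/cvl_0_1 as on this rig, and SPDK_DIR pointing at the checkout (SPDK_DIR is shorthand introduced here, not a variable from the scripts):

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Put one port in a private namespace so target and initiator use separate network stacks.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                                 # connectivity check in both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# Launch nvmf_tgt inside the namespace, mirroring nvmfappstart -m 0xE above.
ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &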
00:14:52.642 [2024-07-22 10:30:58.151382] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:52.642 EAL: No free 2048 kB hugepages reported on node 1 00:14:52.642 [2024-07-22 10:30:58.247405] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:52.642 [2024-07-22 10:30:58.296038] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:52.642 [2024-07-22 10:30:58.296089] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:52.642 [2024-07-22 10:30:58.296099] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:52.642 [2024-07-22 10:30:58.296105] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:52.642 [2024-07-22 10:30:58.296111] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:52.642 [2024-07-22 10:30:58.296232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:52.642 [2024-07-22 10:30:58.296399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:52.642 [2024-07-22 10:30:58.296410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:53.583 10:30:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:53.583 10:30:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:14:53.583 10:30:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:53.583 10:30:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:53.583 10:30:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:53.583 10:30:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:53.583 10:30:58 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:14:53.583 10:30:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.583 10:30:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:53.583 [2024-07-22 10:30:58.986876] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:53.583 10:30:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.583 10:30:59 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:14:53.583 10:30:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.583 10:30:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:53.583 Malloc0 00:14:53.583 10:30:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.583 10:30:59 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:53.583 10:30:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.583 10:30:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:53.583 Delay0 00:14:53.583 10:30:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.583 10:30:59 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
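Once the target answers on /var/tmp/spdk.sock, abort.sh provisions it entirely over JSON-RPC. A condensed sketch of the calls the rpc_cmd wrapper issued above (RPC is shorthand for the rpc.py path shown in the trace; option values are copied verbatim from it):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

"$RPC" nvmf_create_transport -t tcp -o -u 8192 -a 256     # TCP transport with the traced options
"$RPC" bdev_malloc_create 64 4096 -b Malloc0              # 64 MiB RAM-backed bdev, 4096-byte blocks
"$RPC" bdev_delay_create -b Malloc0 -d Delay0 \
       -r 1000000 -t 1000000 -w 1000000 -n 1000000        # delay bdev: I/O lingers long enough to be aborted
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
       -a -s SPDK0                                        # -a: allow any host, -s: serial number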
00:14:53.583 10:30:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.583 10:30:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:53.583 10:30:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.583 10:30:59 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:14:53.583 10:30:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.583 10:30:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:53.583 10:30:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.583 10:30:59 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:53.583 10:30:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.583 10:30:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:53.583 [2024-07-22 10:30:59.064959] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:53.583 10:30:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.583 10:30:59 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:53.583 10:30:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.583 10:30:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:53.583 10:30:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.583 10:30:59 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:14:53.583 EAL: No free 2048 kB hugepages reported on node 1 00:14:53.583 [2024-07-22 10:30:59.174595] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:14:56.121 Initializing NVMe Controllers 00:14:56.121 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:56.121 controller IO queue size 128 less than required 00:14:56.121 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:14:56.121 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:14:56.121 Initialization complete. Launching workers. 
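With the subsystem in place, the remaining wiring and the initiator run look roughly like the sketch below; everything is copied or condensed from the trace, and the -q 128 queue depth is what provokes the "IO queue size 128 less than required" notice above and the abort counts reported next:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0            # expose Delay0 as namespace 1
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
"$RPC" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420   # discovery service on the same port

# Initiator side: the bundled abort example connects over TCP, keeps 128 commands
# outstanding against the slow Delay0 namespace, and submits aborts for queued I/O.
"$SPDK_DIR"/build/examples/abort \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -c 0x1 -t 1 -l warning -q 128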
00:14:56.121 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 34206 00:14:56.121 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 34267, failed to submit 62 00:14:56.121 success 34210, unsuccess 57, failed 0 00:14:56.121 10:31:01 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:56.121 10:31:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.121 10:31:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:56.121 10:31:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.121 10:31:01 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:14:56.121 10:31:01 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:14:56.121 10:31:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:56.121 10:31:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:14:56.121 10:31:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:56.121 10:31:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:14:56.121 10:31:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:56.121 10:31:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:56.121 rmmod nvme_tcp 00:14:56.121 rmmod nvme_fabrics 00:14:56.121 rmmod nvme_keyring 00:14:56.121 10:31:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:56.121 10:31:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:14:56.121 10:31:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:14:56.121 10:31:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1858441 ']' 00:14:56.121 10:31:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1858441 00:14:56.121 10:31:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 1858441 ']' 00:14:56.121 10:31:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 1858441 00:14:56.121 10:31:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:14:56.121 10:31:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:56.121 10:31:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1858441 00:14:56.121 10:31:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:56.121 10:31:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:56.121 10:31:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1858441' 00:14:56.121 killing process with pid 1858441 00:14:56.121 10:31:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 1858441 00:14:56.121 10:31:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 1858441 00:14:56.121 10:31:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:56.121 10:31:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:56.121 10:31:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:56.121 10:31:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:56.121 10:31:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:56.121 10:31:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.121 10:31:01 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:56.121 10:31:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:58.034 10:31:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:58.034 00:14:58.034 real 0m13.868s 00:14:58.034 user 0m13.636s 00:14:58.034 sys 0m6.928s 00:14:58.034 10:31:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:58.034 10:31:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:58.034 ************************************ 00:14:58.034 END TEST nvmf_abort 00:14:58.034 ************************************ 00:14:58.034 10:31:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:58.034 10:31:03 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:14:58.034 10:31:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:58.034 10:31:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:58.034 10:31:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:58.034 ************************************ 00:14:58.034 START TEST nvmf_ns_hotplug_stress 00:14:58.034 ************************************ 00:14:58.034 10:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:14:58.034 * Looking for test storage... 00:14:58.034 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:58.034 10:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:58.034 10:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:14:58.034 10:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:58.034 10:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:58.034 10:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:58.034 10:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:58.034 10:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:58.034 10:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:58.034 10:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:58.034 10:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:58.034 10:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:58.034 10:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:58.294 10:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:58.294 10:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:58.294 10:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:58.294 10:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:58.294 10:31:03 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:58.294 10:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:58.294 10:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:58.294 10:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:58.294 10:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:58.294 10:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:58.294 10:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.294 10:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.294 10:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.294 10:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:14:58.294 10:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.294 10:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:14:58.294 10:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:58.294 10:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:58.294 10:31:03 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:58.294 10:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:58.294 10:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:58.294 10:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:58.294 10:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:58.294 10:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:58.294 10:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:58.294 10:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:14:58.294 10:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:58.294 10:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:58.294 10:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:58.294 10:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:58.294 10:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:58.294 10:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:58.294 10:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:58.294 10:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:58.294 10:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:58.294 10:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:58.294 10:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:14:58.294 10:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:15:06.445 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:06.445 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:15:06.445 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:06.445 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:06.445 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:06.445 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:06.446 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:06.446 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:06.446 10:31:11 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:06.446 Found net devices under 0000:31:00.0: cvl_0_0 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:06.446 Found net devices under 0000:31:00.1: cvl_0_1 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:06.446 10:31:11 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:06.446 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:06.446 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:15:06.446 00:15:06.446 --- 10.0.0.2 ping statistics --- 00:15:06.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:06.446 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:06.446 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:06.446 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:15:06.446 00:15:06.446 --- 10.0.0.1 ping statistics --- 00:15:06.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:06.446 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1863729 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1863729 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 1863729 ']' 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:06.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:06.446 10:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:15:06.446 [2024-07-22 10:31:11.829298] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
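To summarize the test-bed plumbing traced above: nvmf_tcp_init moves the first e810 port into a private network namespace for the SPDK target and leaves the second port in the default namespace for the initiator; the two ports on this rig are evidently cabled to each other, so the 10.0.0.1 <-> 10.0.0.2 traffic really crosses the wire. A condensed sketch of the logged sequence follows; the interface names (cvl_0_0/cvl_0_1), addresses, and binary path are specific to this host, and the snippet only restates what the trace already shows:

  # Test-bed setup condensed from the nvmf/common.sh trace (host-specific names kept as logged)
  TGT_IF=cvl_0_0        # physical port handed to the SPDK target
  INI_IF=cvl_0_1        # physical port kept in the default namespace for the initiator
  NS=cvl_0_0_ns_spdk    # network namespace that will hold the target

  ip -4 addr flush "$TGT_IF"
  ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"                          # isolate the target port
  ip addr add 10.0.0.1/24 dev "$INI_IF"                      # initiator address
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target address
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP into the initiator side
  ping -c 1 10.0.0.2                                         # initiator -> target sanity check
  ip netns exec "$NS" ping -c 1 10.0.0.1                     # target -> initiator sanity check

  # nvmfappstart then launches the target inside that namespace (flags as logged,
  # full path in the log: .../spdk/build/bin/nvmf_tgt):
  ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

Both pings answer in well under a millisecond, and -m 0xE brings the target up with reactors on cores 1-3, which matches the EAL/reactor notices that follow.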
00:15:06.446 [2024-07-22 10:31:11.829363] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:06.446 EAL: No free 2048 kB hugepages reported on node 1 00:15:06.446 [2024-07-22 10:31:11.924597] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:06.447 [2024-07-22 10:31:11.973169] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:06.447 [2024-07-22 10:31:11.973225] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:06.447 [2024-07-22 10:31:11.973233] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:06.447 [2024-07-22 10:31:11.973240] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:06.447 [2024-07-22 10:31:11.973246] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:06.447 [2024-07-22 10:31:11.973425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:06.447 [2024-07-22 10:31:11.973569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:06.447 [2024-07-22 10:31:11.973569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:07.018 10:31:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:07.018 10:31:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:15:07.018 10:31:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:07.018 10:31:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:07.018 10:31:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:15:07.018 10:31:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:07.018 10:31:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:15:07.018 10:31:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:07.279 [2024-07-22 10:31:12.779834] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:07.279 10:31:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:07.540 10:31:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:07.540 [2024-07-22 10:31:13.125770] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:07.540 10:31:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:07.800 10:31:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:15:07.800 Malloc0 00:15:07.800 10:31:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:08.060 Delay0 00:15:08.060 10:31:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:08.320 10:31:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:15:08.320 NULL1 00:15:08.320 10:31:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:08.580 10:31:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1864360 00:15:08.580 10:31:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:08.580 10:31:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:15:08.580 10:31:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:08.580 EAL: No free 2048 kB hugepages reported on node 1 00:15:08.841 10:31:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:08.841 10:31:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:15:08.841 10:31:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:15:09.101 true 00:15:09.101 10:31:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:09.101 10:31:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:09.361 10:31:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:09.361 10:31:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:15:09.361 10:31:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:15:09.621 true 00:15:09.621 10:31:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:09.621 10:31:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:09.882 10:31:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:09.882 10:31:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:15:09.882 10:31:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:15:10.142 true 00:15:10.142 10:31:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:10.142 10:31:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:10.402 10:31:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:10.402 10:31:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:15:10.402 10:31:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:15:10.662 true 00:15:10.662 10:31:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:10.662 10:31:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:10.922 10:31:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:10.922 10:31:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:15:10.922 10:31:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:15:11.182 true 00:15:11.182 10:31:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:11.182 10:31:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:11.442 10:31:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:11.442 10:31:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:15:11.442 10:31:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:15:11.701 true 00:15:11.701 10:31:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:11.701 10:31:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:11.961 10:31:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:11.961 10:31:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 
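Before the stress loop begins, ns_hotplug_stress.sh provisions the target over JSON-RPC (script lines @23-@36 in the trace). Collapsing the repeated full path to plain rpc.py (spdk/scripts/rpc.py run against the target in the namespace), the logged sequence is:

  # Target provisioning as logged
  rpc.py nvmf_create_transport -t tcp -o -u 8192                      # TCP transport, flags as logged (-u: io-unit-size in bytes)
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001 -m 10                                  # allow any host, serial number, at most 10 namespaces
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_malloc_create 32 512 -b Malloc0                         # 32 MB RAM bdev, 512-byte blocks
  rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000                     # wrap Malloc0 with large artificial latencies (values in us)
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0      # namespace 1: the slow delay bdev
  rpc.py bdev_null_create NULL1 1000 512                              # 1000 MB null bdev, 512-byte blocks
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1       # namespace 2: the resizable null bdev

Delay0 is the namespace that gets hot-removed and re-added on every loop pass, while NULL1 (namespace 2, the one spdk_nvme_perf ends up reading from, per the NSID 2 association printed in the summary further down) is the one that keeps being resized.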
00:15:11.961 10:31:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:15:12.221 true 00:15:12.221 10:31:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:12.221 10:31:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:12.481 10:31:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:12.481 10:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:15:12.481 10:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:15:12.741 true 00:15:12.741 10:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:12.741 10:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:12.741 10:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:13.002 10:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:15:13.002 10:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:15:13.261 true 00:15:13.261 10:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:13.261 10:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:13.261 10:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:13.521 10:31:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:15:13.521 10:31:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:15:13.781 true 00:15:13.781 10:31:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:13.781 10:31:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:13.781 10:31:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:14.041 10:31:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:15:14.041 10:31:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1011 00:15:14.301 true 00:15:14.301 10:31:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:14.301 10:31:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:14.301 10:31:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:14.561 10:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:15:14.561 10:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:15:14.821 true 00:15:14.821 10:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:14.821 10:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:14.821 10:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:15.081 10:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:15:15.081 10:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:15:15.341 true 00:15:15.341 10:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:15.341 10:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:15.341 10:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:15.601 10:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:15:15.601 10:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:15:15.601 true 00:15:15.861 10:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:15.861 10:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:15.861 10:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:16.122 10:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:15:16.122 10:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:15:16.122 true 00:15:16.383 10:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 
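From here until the perf process exits, the trace repeats the same pattern (ns_hotplug_stress.sh lines @44-@50): check that the perf initiator is still alive, hot-remove namespace 1, re-add Delay0, bump null_size, and resize NULL1. A sketch of that loop, reconstructed from the line markers; the real script may pace or gate the loop slightly differently, and rpc.py is again shorthand for the full scripts/rpc.py path:

  # I/O load started once before the loop (command copied from the @40 trace line, binary path shortened):
  spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!

  null_size=1000
  while kill -0 "$PERF_PID"; do                                       # run for the lifetime of the perf job
      rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # hot-remove namespace 1
      rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # hot-add Delay0 back as namespace 1
      null_size=$((null_size + 1))
      rpc.py bdev_null_resize NULL1 "$null_size"                      # grow NULL1 by 1 MB per pass
  done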
00:15:16.383 10:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:16.383 10:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:16.644 10:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:15:16.644 10:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:15:16.644 true 00:15:16.903 10:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:16.903 10:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:16.903 10:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:17.163 10:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:15:17.163 10:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:15:17.163 true 00:15:17.423 10:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:17.423 10:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:17.423 10:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:17.683 10:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:15:17.683 10:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:15:17.683 true 00:15:17.683 10:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:17.683 10:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:17.942 10:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:18.202 10:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:15:18.202 10:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:15:18.202 true 00:15:18.202 10:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:18.202 10:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:18.461 10:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:18.730 10:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:15:18.730 10:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:15:18.730 true 00:15:18.730 10:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:18.730 10:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:19.053 10:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:19.053 10:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:15:19.053 10:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:15:19.347 true 00:15:19.347 10:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:19.347 10:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:19.606 10:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:19.606 10:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:15:19.606 10:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:15:19.866 true 00:15:19.866 10:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:19.866 10:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:20.126 10:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:20.126 10:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:15:20.126 10:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:15:20.386 true 00:15:20.386 10:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:20.386 10:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:20.645 10:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:20.645 10:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:15:20.645 10:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:15:20.905 true 00:15:20.905 10:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:20.905 10:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:20.905 10:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:21.165 10:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:15:21.165 10:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:15:21.424 true 00:15:21.424 10:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:21.424 10:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:21.424 10:31:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:21.684 10:31:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:15:21.684 10:31:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:15:21.945 true 00:15:21.945 10:31:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:21.945 10:31:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:21.945 10:31:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:22.205 10:31:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:15:22.205 10:31:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:15:22.465 true 00:15:22.465 10:31:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:22.465 10:31:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:22.466 10:31:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:22.726 10:31:28 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:15:22.726 10:31:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:15:22.985 true 00:15:22.985 10:31:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:22.985 10:31:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:22.985 10:31:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:23.245 10:31:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:15:23.245 10:31:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:15:23.245 true 00:15:23.504 10:31:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:23.504 10:31:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:23.504 10:31:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:23.764 10:31:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:15:23.764 10:31:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:15:23.764 true 00:15:24.025 10:31:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:24.025 10:31:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:24.025 10:31:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:24.286 10:31:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:15:24.286 10:31:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:15:24.286 true 00:15:24.286 10:31:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:24.286 10:31:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:24.545 10:31:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:24.804 10:31:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:15:24.804 10:31:30 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:15:24.804 true 00:15:24.804 10:31:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:24.804 10:31:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:25.063 10:31:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:25.322 10:31:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:15:25.322 10:31:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:15:25.322 true 00:15:25.322 10:31:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:25.322 10:31:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:25.582 10:31:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:25.842 10:31:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:15:25.842 10:31:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:15:25.842 true 00:15:25.842 10:31:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:25.842 10:31:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:26.101 10:31:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:26.360 10:31:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:15:26.360 10:31:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:15:26.360 true 00:15:26.360 10:31:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:26.360 10:31:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:26.620 10:31:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:26.880 10:31:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:15:26.880 10:31:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:15:26.880 true 00:15:26.880 
10:31:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:26.880 10:31:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:27.140 10:31:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:27.399 10:31:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:15:27.399 10:31:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:15:27.399 true 00:15:27.399 10:31:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:27.399 10:31:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:27.658 10:31:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:27.917 10:31:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:15:27.917 10:31:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:15:27.917 true 00:15:27.917 10:31:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:27.917 10:31:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:28.176 10:31:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:28.435 10:31:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:15:28.435 10:31:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:15:28.435 true 00:15:28.435 10:31:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:28.435 10:31:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:28.702 10:31:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:28.962 10:31:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:15:28.962 10:31:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:15:28.962 true 00:15:28.962 10:31:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:28.962 10:31:34 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:29.223 10:31:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:29.223 10:31:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:15:29.223 10:31:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:15:29.483 true 00:15:29.483 10:31:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:29.483 10:31:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:29.743 10:31:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:29.743 10:31:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:15:29.743 10:31:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:15:30.003 true 00:15:30.003 10:31:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:30.003 10:31:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:30.262 10:31:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:30.262 10:31:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:15:30.262 10:31:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:15:30.522 true 00:15:30.522 10:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:30.522 10:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:30.781 10:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:30.781 10:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:15:30.781 10:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:15:31.042 true 00:15:31.042 10:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:31.042 10:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:15:31.302 10:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:31.302 10:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:15:31.302 10:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:15:31.562 true 00:15:31.562 10:31:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:31.562 10:31:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:31.822 10:31:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:31.822 10:31:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:15:31.822 10:31:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:15:32.082 true 00:15:32.082 10:31:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:32.082 10:31:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:32.342 10:31:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:32.342 10:31:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:15:32.342 10:31:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:15:32.601 true 00:15:32.601 10:31:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:32.601 10:31:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:32.860 10:31:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:32.860 10:31:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:15:32.860 10:31:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:15:33.120 true 00:15:33.120 10:31:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:33.120 10:31:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:33.380 10:31:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:33.380 10:31:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:15:33.380 10:31:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:15:33.640 true 00:15:33.640 10:31:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:33.640 10:31:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:33.640 10:31:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:33.900 10:31:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:15:33.900 10:31:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:15:34.160 true 00:15:34.160 10:31:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:34.160 10:31:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:34.160 10:31:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:34.432 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:15:34.432 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:15:34.691 true 00:15:34.691 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:34.691 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:34.691 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:34.951 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:15:34.951 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:15:35.212 true 00:15:35.212 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:35.212 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:35.212 10:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:35.472 10:31:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 
00:15:35.472 10:31:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:15:35.732 true 00:15:35.732 10:31:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:35.732 10:31:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:35.732 10:31:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:35.992 10:31:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:15:35.993 10:31:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:15:35.993 true 00:15:36.252 10:31:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:36.252 10:31:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:36.252 10:31:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:36.513 10:31:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:15:36.513 10:31:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:15:36.513 true 00:15:36.773 10:31:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:36.773 10:31:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:36.773 10:31:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:37.033 10:31:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056 00:15:37.033 10:31:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056 00:15:37.033 true 00:15:37.293 10:31:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:37.293 10:31:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:37.293 10:31:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:37.552 10:31:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1057 00:15:37.552 10:31:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1057 00:15:37.552 true 00:15:37.552 10:31:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:37.552 10:31:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:37.812 10:31:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:38.071 10:31:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1058 00:15:38.071 10:31:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1058 00:15:38.071 true 00:15:38.071 10:31:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:38.071 10:31:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:38.330 10:31:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:38.589 10:31:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1059 00:15:38.589 10:31:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1059 00:15:38.589 true 00:15:38.589 10:31:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:38.589 10:31:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:38.849 Initializing NVMe Controllers 00:15:38.849 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:38.849 Controller IO queue size 128, less than required. 00:15:38.849 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:38.849 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:38.849 Initialization complete. Launching workers. 
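[editor's note] The initiator has now attached to the target at 10.0.0.2:4420 and prints the throughput/latency summary that follows. As a quick sanity check of that summary (assuming a 512-byte I/O size, which is an inference and not stated in the log), the IOPS and MiB/s columns are consistent:
  echo '31089.43 * 512 / 1048576' | bc -l   # ~15.18 MiB/s, matching the MiB/s column below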
00:15:38.849 ======================================================== 00:15:38.849 Latency(us) 00:15:38.849 Device Information : IOPS MiB/s Average min max 00:15:38.849 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 31089.43 15.18 4116.98 1455.03 10272.51 00:15:38.849 ======================================================== 00:15:38.849 Total : 31089.43 15.18 4116.98 1455.03 10272.51 00:15:38.849 00:15:38.849 10:31:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:39.108 10:31:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1060 00:15:39.109 10:31:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1060 00:15:39.109 true 00:15:39.109 10:31:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1864360 00:15:39.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1864360) - No such process 00:15:39.109 10:31:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1864360 00:15:39.109 10:31:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:39.367 10:31:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:39.367 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:15:39.367 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:15:39.367 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:15:39.367 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:39.367 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:15:39.625 null0 00:15:39.625 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:39.625 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:39.625 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:15:39.884 null1 00:15:39.884 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:39.884 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:39.884 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:15:39.884 null2 00:15:39.884 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:39.884 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:39.884 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:15:40.143 null3 00:15:40.143 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:40.143 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:40.143 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:15:40.402 null4 00:15:40.402 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:40.402 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:40.402 10:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:15:40.402 null5 00:15:40.402 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:40.402 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:40.402 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:15:40.661 null6 00:15:40.661 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:40.661 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:40.661 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:15:40.661 null7 00:15:40.661 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:40.661 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:40.661 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:15:40.661 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
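[editor's note] At this point the script has moved to the multi-threaded phase: it creates eight null bdevs (null0 through null7, 100 MB each with a 4096-byte block size), one per namespace-churning worker. An equivalent bash form of the traced creation loop (the script itself uses the (( i = 0 )); (( i < nthreads )); (( ++i )) arithmetic style shown above; rpc_py is the same stand-in as before):
  nthreads=8
  for ((i = 0; i < nthreads; i++)); do
      "$rpc_py" bdev_null_create "null$i" 100 4096   # 100 MB null bdev, 4096-byte blocks
  done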
00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
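[editor's note] Each worker is the add_remove helper traced at sh@14-sh@18: it pins one namespace ID to one null bdev and adds/removes that namespace ten times. A reconstruction of that helper; rpc_py and nqn remain stand-ins for the full rpc.py path and nqn.2016-06.io.spdk:cnode1:
  add_remove() {
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; i++)); do
          "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # sh@17 in the trace
          "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$nsid"           # sh@18 in the trace
      done
  }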
00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
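[editor's note] The eight workers are launched in the background, namespace ID i+1 backed by null$i, and their PIDs are collected so the parent can block on them; the "wait 1870845 1870847 ..." line a little further down is that final wait. A sketch of the launch pattern, reconstructed rather than copied:
  pids=()
  for ((i = 0; i < nthreads; i++)); do
      add_remove $((i + 1)) "null$i" &   # one background worker per namespace
      pids+=($!)
  done
  wait "${pids[@]}"                      # corresponds to sh@66 in the trace
The interleaved (( ++i )) / nvmf_subsystem_add_ns / nvmf_subsystem_remove_ns lines that follow are the eight workers' traces mixed together, which is the expected output of this concurrent add/remove stress.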
00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:40.921 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:40.922 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:40.922 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:40.922 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1870845 1870847 1870850 1870853 1870856 1870859 1870861 1870864 00:15:40.922 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:15:40.922 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:15:40.922 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:40.922 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:40.922 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:40.922 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:40.922 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:40.922 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:40.922 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:40.922 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:40.922 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:40.922 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:40.922 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:41.181 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:41.181 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:41.181 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:41.181 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:41.181 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:41.181 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:41.181 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:41.181 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:41.181 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:41.181 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:41.181 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:41.181 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:41.181 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:41.181 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:41.181 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:41.181 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:41.181 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:41.181 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:41.181 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:41.181 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:41.181 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:41.181 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:41.181 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:41.181 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:41.182 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:41.441 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:41.441 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:15:41.441 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:41.441 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:41.441 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:41.441 10:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:41.441 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:41.441 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:41.441 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:41.441 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:41.441 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:41.441 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:41.441 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:41.441 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:41.441 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:41.441 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:41.441 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:41.441 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:41.441 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:41.441 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:41.441 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:41.441 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:41.441 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:41.441 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:41.442 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:41.701 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:41.701 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:41.701 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:41.701 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:41.701 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:41.701 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:41.701 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:41.701 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:41.701 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:41.701 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:41.701 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:41.701 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:41.701 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:41.701 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:41.701 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:41.701 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:41.701 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:41.960 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:41.960 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:41.960 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:41.960 10:31:47 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:41.960 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:41.960 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:41.960 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:41.960 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:41.960 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:41.960 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:41.960 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:41.960 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:41.960 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:41.960 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:41.960 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:41.960 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:41.960 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:41.960 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:41.960 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:41.960 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:41.960 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:41.960 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:41.960 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:41.960 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:41.960 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:41.960 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:41.960 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:41.960 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:41.960 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:41.960 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:42.220 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:42.220 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:42.220 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:42.220 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:42.220 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:42.220 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:42.220 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:42.220 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:42.220 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:42.220 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:42.220 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:42.220 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:42.220 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:42.220 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:42.220 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:42.220 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:42.220 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:42.220 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:42.220 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:15:42.220 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:42.220 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:42.220 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:42.220 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:42.220 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:42.481 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:42.481 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:42.481 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:42.481 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:42.481 10:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:42.481 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:42.481 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:42.481 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:42.481 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:42.481 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:42.481 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:42.481 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:42.481 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:42.481 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:42.481 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:42.481 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:42.481 10:31:48 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:42.742 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:42.742 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:42.742 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:42.742 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:42.742 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:42.742 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:42.742 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:42.742 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:42.742 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:42.742 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:42.742 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:42.742 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:42.742 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:42.742 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:42.742 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:42.742 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:42.742 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:42.742 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:42.742 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:42.742 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:42.742 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:42.742 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:42.742 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:42.742 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:42.742 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:42.742 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:42.742 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:42.742 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:42.742 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:43.003 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:43.003 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:43.003 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:43.003 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:43.003 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:43.003 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:43.003 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:43.003 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:43.003 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:43.003 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:43.004 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:43.004 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:43.004 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:43.004 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:43.004 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:43.004 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:43.004 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:43.004 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:43.004 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:43.004 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:43.004 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:43.004 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:43.004 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:43.265 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:43.265 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:43.265 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:43.265 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:43.265 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:43.265 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:43.265 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:43.265 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:43.265 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:43.265 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:43.265 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:43.265 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:43.265 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:43.265 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:43.265 10:31:48 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:43.265 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:43.265 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:43.265 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:43.265 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:43.265 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:43.265 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:43.265 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:43.265 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:43.265 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:43.265 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:43.265 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:43.265 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:43.265 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:43.265 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:43.265 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:43.265 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:43.527 10:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:43.527 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:43.527 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:43.527 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:43.527 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:43.527 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:43.527 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:43.527 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:43.527 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:43.527 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:43.527 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:43.527 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:43.527 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:43.527 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:43.527 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:43.527 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:43.527 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:43.527 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:43.527 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:43.788 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:43.788 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:43.788 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:43.788 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:43.788 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:43.788 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:43.788 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:43.788 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:43.788 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:43.788 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:43.788 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:43.788 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:43.788 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:43.788 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:43.788 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:43.788 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:43.788 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:43.788 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:43.788 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:43.788 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:43.788 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:43.788 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:44.049 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:44.049 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:44.049 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:44.049 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:44.049 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:44.049 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:44.049 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:44.049 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:44.049 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:44.049 10:31:49 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:44.049 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:44.049 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:44.049 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:44.049 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:44.049 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:44.049 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:44.049 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:44.049 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:44.049 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:44.049 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:44.049 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:44.049 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:44.049 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:44.049 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:44.311 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:44.311 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:44.311 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:44.311 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:44.311 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:44.311 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:44.311 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:44.311 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:44.311 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:44.311 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:44.311 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:44.311 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:44.311 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:44.311 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:44.311 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:44.311 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:44.311 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:44.311 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:44.311 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:44.311 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:15:44.311 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:44.311 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:15:44.311 10:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:44.311 10:31:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:15:44.311 10:31:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:44.311 10:31:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:44.311 rmmod nvme_tcp 00:15:44.571 rmmod nvme_fabrics 00:15:44.571 rmmod nvme_keyring 00:15:44.571 10:31:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:44.571 10:31:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:15:44.571 10:31:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:15:44.571 10:31:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1863729 ']' 00:15:44.571 10:31:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1863729 00:15:44.571 10:31:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 1863729 ']' 00:15:44.571 10:31:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 1863729 00:15:44.571 10:31:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:15:44.571 10:31:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:44.571 10:31:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1863729 00:15:44.571 10:31:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:44.571 10:31:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:44.571 10:31:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1863729' 00:15:44.571 killing process with pid 1863729 00:15:44.571 10:31:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 1863729 00:15:44.571 
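Note: the interleaved add/remove traffic above is the core of the hot-plug stress pattern, with one background worker per namespace ID repeatedly attaching and detaching its namespace (each backed by a null bdev) on the shared subsystem, and the workers deliberately not serialized against each other. A plausible reconstruction of one worker, inferred from the @16/@17/@18 trace lines rather than copied from ns_hotplug_stress.sh itself:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    add_remove() {                      # one of these runs in the background per namespace
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do  # matches the (( ++i )) / (( i < 10 )) guards in the trace
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"
        done
    }
    for n in $(seq 1 8); do add_remove "$n" "null$((n - 1))" & done
    wait

Because eight of these loops run concurrently against one subsystem, the adds and removes in the trace interleave freely, which is exactly the hot add/remove race the test is meant to exercise.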
10:31:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 1863729 00:15:44.571 10:31:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:44.571 10:31:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:44.571 10:31:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:44.571 10:31:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:44.571 10:31:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:44.571 10:31:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:44.571 10:31:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:44.571 10:31:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:47.193 10:31:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:47.193 00:15:47.193 real 0m48.696s 00:15:47.193 user 3m15.001s 00:15:47.193 sys 0m17.424s 00:15:47.193 10:31:52 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:47.193 10:31:52 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:15:47.193 ************************************ 00:15:47.193 END TEST nvmf_ns_hotplug_stress 00:15:47.193 ************************************ 00:15:47.193 10:31:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:47.193 10:31:52 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:47.193 10:31:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:47.193 10:31:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:47.193 10:31:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:47.193 ************************************ 00:15:47.193 START TEST nvmf_connect_stress 00:15:47.193 ************************************ 00:15:47.193 10:31:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:47.193 * Looking for test storage... 
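Note: the block that closed out nvmf_ns_hotplug_stress just above is the shared nvmftestfini teardown from test/nvmf/common.sh: drop the EXIT trap, unload the host-side NVMe modules, stop the nvmf_tgt instance (pid 1863729 in this run), and flush the initiator interface. A simplified sketch of that sequence, with the retry loop around the module unloads and the xtrace plumbing omitted:

    trap - SIGINT SIGTERM EXIT
    modprobe -v -r nvme-tcp                   # drags out nvme_fabrics and nvme_keyring too, per the rmmod lines
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"        # killprocess: stop the target started by nvmfappstart
    ip -4 addr flush cvl_0_1                  # return the initiator port to a clean state
    # remove_spdk_ns (run with xtrace suppressed) also tears down the cvl_0_0_ns_spdk namespace

The real/user/sys block is run_test's per-suite timing summary; the next suite, nvmf_connect_stress, then starts from the same helpers.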
00:15:47.193 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:47.193 10:31:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:47.193 10:31:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:15:47.193 10:31:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:47.193 10:31:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:47.193 10:31:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:47.193 10:31:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:47.193 10:31:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:47.193 10:31:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:47.193 10:31:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:47.193 10:31:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:47.193 10:31:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:47.193 10:31:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:47.193 10:31:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:47.193 10:31:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:47.193 10:31:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:47.193 10:31:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:47.193 10:31:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:47.193 10:31:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:47.193 10:31:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:47.193 10:31:52 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:47.193 10:31:52 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:47.193 10:31:52 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:47.193 10:31:52 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.194 10:31:52 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.194 10:31:52 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.194 10:31:52 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:15:47.194 10:31:52 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.194 10:31:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:15:47.194 10:31:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:47.194 10:31:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:47.194 10:31:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:47.194 10:31:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:47.194 10:31:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:47.194 10:31:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:47.194 10:31:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:47.194 10:31:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:47.194 10:31:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:15:47.194 10:31:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:47.194 10:31:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:47.194 10:31:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:47.194 10:31:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:47.194 10:31:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:47.194 10:31:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:47.194 10:31:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:15:47.194 10:31:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:47.194 10:31:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:47.194 10:31:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:47.194 10:31:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:15:47.194 10:31:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:55.359 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:55.359 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:55.359 Found net devices under 0000:31:00.0: cvl_0_0 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:55.359 10:32:00 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:55.359 Found net devices under 0000:31:00.1: cvl_0_1 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:55.359 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:55.360 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:55.360 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:55.360 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:55.360 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:55.360 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:55.360 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:55.360 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:55.360 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:55.360 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:55.360 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:55.360 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:55.360 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:55.360 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
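Note: the common.sh prologue above builds the loopback topology these phy tests rely on. The two ports of the E810 NIC (cvl_0_0 and cvl_0_1, presumably cabled back to back) are split between a fresh network namespace and the root namespace, so a single host can act as both NVMe/TCP target and initiator. Condensed from the commands in the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic on the initiator side
    ping -c 1 10.0.0.2                                                  # reachability check in both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1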
00:15:55.360 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.582 ms 00:15:55.360 00:15:55.360 --- 10.0.0.2 ping statistics --- 00:15:55.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.360 rtt min/avg/max/mdev = 0.582/0.582/0.582/0.000 ms 00:15:55.360 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:55.360 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:55.360 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:15:55.360 00:15:55.360 --- 10.0.0.1 ping statistics --- 00:15:55.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.360 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:15:55.360 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:55.360 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:15:55.360 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:55.360 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:55.360 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:55.360 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:55.360 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:55.360 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:55.360 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:55.360 10:32:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:15:55.360 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:55.360 10:32:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:55.360 10:32:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:55.360 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1876463 00:15:55.360 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:55.360 10:32:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1876463 00:15:55.360 10:32:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 1876463 ']' 00:15:55.360 10:32:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.360 10:32:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:55.360 10:32:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:55.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:55.360 10:32:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:55.360 10:32:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:55.360 [2024-07-22 10:32:00.646074] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
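Note: nvmfappstart then launches the target inside that namespace and blocks until its RPC socket answers, which is what the nvmfpid and waitforlisten lines above are doing. Roughly the following, where the readiness poll is only a stand-in for the real waitforlisten helper:

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &    # cores 1-3 (mask 0xE), all tracepoint groups enabled
    nvmfpid=$!
    # poll /var/tmp/spdk.sock until the app is up; waitforlisten does this with more care
    until ./scripts/rpc.py -t 1 rpc_get_methods > /dev/null 2>&1; do
        sleep 0.5
    done

The DPDK EAL parameter dump and the three "Reactor started on core" notices that follow are the target confirming exactly that core mask.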
00:15:55.360 [2024-07-22 10:32:00.646131] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:55.360 EAL: No free 2048 kB hugepages reported on node 1 00:15:55.360 [2024-07-22 10:32:00.739018] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:55.360 [2024-07-22 10:32:00.785764] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:55.360 [2024-07-22 10:32:00.785819] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:55.360 [2024-07-22 10:32:00.785827] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:55.360 [2024-07-22 10:32:00.785839] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:55.360 [2024-07-22 10:32:00.785845] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:55.360 [2024-07-22 10:32:00.785977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:55.360 [2024-07-22 10:32:00.786149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:55.360 [2024-07-22 10:32:00.786149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:55.931 [2024-07-22 10:32:01.461986] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:55.931 [2024-07-22 10:32:01.507814] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:55.931 NULL1 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1876539 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:55.931 EAL: No free 2048 kB hugepages reported on node 1 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:55.931 10:32:01 
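Note: the rpc_cmd calls above (rpc_cmd is the test harness's wrapper around rpc.py talking to the target's /var/tmp/spdk.sock) build the whole target configuration for this test, and the connect_stress tool is then pointed at it. The equivalent direct invocations, condensed from the trace:

    rpc=./scripts/rpc.py
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    "$rpc" bdev_null_create NULL1 1000 512      # 1000 MB null bdev, 512-byte blocks
    ./test/nvme/connect_stress/connect_stress -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
    PERF_PID=$!                                 # 1876539 in this run; polled below with kill -0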
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1876539 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.931 10:32:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:56.500 10:32:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.500 10:32:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1876539 00:15:56.500 10:32:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:56.500 10:32:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.500 10:32:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:56.759 10:32:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.759 10:32:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1876539 00:15:56.759 10:32:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:56.759 10:32:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.759 10:32:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:57.018 10:32:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.018 10:32:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # 
kill -0 1876539 00:15:57.018 10:32:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:57.018 10:32:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.018 10:32:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:57.277 10:32:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.277 10:32:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1876539 00:15:57.277 10:32:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:57.277 10:32:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.277 10:32:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:57.847 10:32:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.847 10:32:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1876539 00:15:57.847 10:32:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:57.847 10:32:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.847 10:32:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:58.106 10:32:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.106 10:32:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1876539 00:15:58.106 10:32:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:58.106 10:32:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.106 10:32:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:58.366 10:32:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.366 10:32:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1876539 00:15:58.366 10:32:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:58.366 10:32:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.366 10:32:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:58.625 10:32:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.625 10:32:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1876539 00:15:58.625 10:32:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:58.625 10:32:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.625 10:32:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:58.885 10:32:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.885 10:32:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1876539 00:15:58.885 10:32:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:58.885 10:32:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.885 10:32:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:59.453 10:32:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.453 10:32:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1876539 00:15:59.453 10:32:04 
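Note: the repeating kill -0 1876539 / rpc_cmd pairs above and below are the watchdog half of the test. While connect_stress churns through controller connects and disconnects for its -t 10 window, the script keeps replaying a small batch of RPCs at the target and checks that the tool is still alive. A simplified reading of that loop; the 20 RPC lines the script cats into rpc.txt are not visible in this trace, so a placeholder command is used:

    rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
    for i in $(seq 1 20); do
        echo "rpc_get_methods" >> "$rpcs"       # placeholder; the real script writes its own RPC list here
    done
    while kill -0 "$PERF_PID"; do
        ./scripts/rpc.py < "$rpcs"              # rpc.py in batch mode, one command per line on stdin
    done
    wait "$PERF_PID"                            # reached once kill -0 reports "No such process"
    rm -f "$rpcs"

The property being checked is that the target's RPC handling stays responsive even while its TCP transport is being connected to and torn down in a tight loop.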
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:59.453 10:32:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.453 10:32:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:59.712 10:32:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.712 10:32:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1876539 00:15:59.712 10:32:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:59.712 10:32:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.712 10:32:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:59.971 10:32:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.971 10:32:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1876539 00:15:59.971 10:32:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:59.971 10:32:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.971 10:32:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:00.230 10:32:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.230 10:32:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1876539 00:16:00.230 10:32:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:00.230 10:32:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.230 10:32:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:00.488 10:32:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.488 10:32:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1876539 00:16:00.488 10:32:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:00.488 10:32:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.488 10:32:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:01.056 10:32:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.056 10:32:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1876539 00:16:01.056 10:32:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:01.056 10:32:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.056 10:32:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:01.315 10:32:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.315 10:32:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1876539 00:16:01.315 10:32:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:01.315 10:32:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.315 10:32:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:01.574 10:32:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.574 10:32:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1876539 00:16:01.574 10:32:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:16:01.574 10:32:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.574 10:32:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:01.833 10:32:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.833 10:32:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1876539 00:16:01.833 10:32:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:01.833 10:32:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.833 10:32:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:02.401 10:32:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.401 10:32:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1876539 00:16:02.401 10:32:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:02.401 10:32:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.401 10:32:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:02.660 10:32:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.660 10:32:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1876539 00:16:02.660 10:32:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:02.660 10:32:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.660 10:32:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:02.920 10:32:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.920 10:32:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1876539 00:16:02.920 10:32:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:02.920 10:32:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.920 10:32:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:03.181 10:32:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.181 10:32:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1876539 00:16:03.181 10:32:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:03.181 10:32:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.181 10:32:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:03.440 10:32:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.440 10:32:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1876539 00:16:03.440 10:32:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:03.440 10:32:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.440 10:32:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:04.008 10:32:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.008 10:32:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1876539 00:16:04.008 10:32:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:04.008 10:32:09 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.008 10:32:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:04.267 10:32:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.267 10:32:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1876539 00:16:04.267 10:32:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:04.267 10:32:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.267 10:32:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:04.526 10:32:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.526 10:32:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1876539 00:16:04.526 10:32:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:04.527 10:32:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.527 10:32:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:04.786 10:32:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.786 10:32:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1876539 00:16:04.786 10:32:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:04.786 10:32:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.786 10:32:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:05.046 10:32:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.046 10:32:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1876539 00:16:05.046 10:32:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:05.046 10:32:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.046 10:32:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:05.617 10:32:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.617 10:32:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1876539 00:16:05.617 10:32:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:05.617 10:32:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.617 10:32:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:05.878 10:32:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.878 10:32:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1876539 00:16:05.878 10:32:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:05.878 10:32:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.878 10:32:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:06.139 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:06.139 10:32:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.139 10:32:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1876539 00:16:06.139 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1876539) - No such process 00:16:06.139 10:32:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1876539 00:16:06.139 10:32:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:06.139 10:32:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:06.139 10:32:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:16:06.139 10:32:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:06.139 10:32:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:16:06.139 10:32:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:06.139 10:32:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:16:06.139 10:32:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:06.139 10:32:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:06.139 rmmod nvme_tcp 00:16:06.139 rmmod nvme_fabrics 00:16:06.139 rmmod nvme_keyring 00:16:06.139 10:32:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:06.139 10:32:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:16:06.139 10:32:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:16:06.139 10:32:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1876463 ']' 00:16:06.139 10:32:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1876463 00:16:06.139 10:32:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 1876463 ']' 00:16:06.139 10:32:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 1876463 00:16:06.139 10:32:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:16:06.139 10:32:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:06.139 10:32:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1876463 00:16:06.139 10:32:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:06.139 10:32:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:06.139 10:32:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1876463' 00:16:06.139 killing process with pid 1876463 00:16:06.139 10:32:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 1876463 00:16:06.139 10:32:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 1876463 00:16:06.399 10:32:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:06.399 10:32:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:06.399 10:32:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:06.399 10:32:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:06.399 10:32:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:06.399 10:32:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:06.399 10:32:11 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:06.399 10:32:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.940 10:32:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:08.940 00:16:08.940 real 0m21.622s 00:16:08.940 user 0m42.525s 00:16:08.940 sys 0m9.153s 00:16:08.940 10:32:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:08.940 10:32:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:08.940 ************************************ 00:16:08.940 END TEST nvmf_connect_stress 00:16:08.940 ************************************ 00:16:08.940 10:32:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:08.940 10:32:14 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:08.940 10:32:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:08.940 10:32:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:08.940 10:32:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:08.940 ************************************ 00:16:08.940 START TEST nvmf_fused_ordering 00:16:08.940 ************************************ 00:16:08.940 10:32:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:08.940 * Looking for test storage... 00:16:08.940 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:08.940 10:32:14 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:08.940 10:32:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:16:08.940 10:32:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:08.940 10:32:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:08.940 10:32:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:08.940 10:32:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:08.940 10:32:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:08.940 10:32:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:08.940 10:32:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:08.940 10:32:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:08.940 10:32:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:08.940 10:32:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:08.940 10:32:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:08.940 10:32:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:08.940 10:32:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:08.940 10:32:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:08.940 10:32:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 
-- # NET_TYPE=phy 00:16:08.940 10:32:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:08.940 10:32:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:08.940 10:32:14 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:08.940 10:32:14 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:08.940 10:32:14 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:08.940 10:32:14 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.940 10:32:14 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.940 10:32:14 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.940 10:32:14 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:16:08.940 10:32:14 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.940 10:32:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:16:08.940 10:32:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:08.940 10:32:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:08.940 10:32:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:08.940 10:32:14 
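Before the fused_ordering test body starts, nvmf/common.sh establishes the fabric parameters recorded above: listener ports 4420/4421/4422, the 192.168.100 address prefix, and a host NQN generated on the fly with nvme gen-hostnqn. A rough, hand-written equivalent of that environment is sketched below; variable names mirror the log, and nvme-cli's gen-hostnqn subcommand is assumed to be installed.

# Rough equivalent of the environment the harness sources above.
NVMF_PORT=4420
NVMF_SECOND_PORT=4421
NVMF_THIRD_PORT=4422
NVMF_SERIAL=SPDKISFASTANDAWESOME
NVME_HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}           # keep only the trailing UUID
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn   # common.sh default; this test targets cnode1
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

export NVMF_PORT NVMF_SECOND_PORT NVMF_THIRD_PORT NVMF_SERIAL NVME_HOSTNQN NVME_HOSTID NVME_SUBNQN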
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:08.940 10:32:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:08.940 10:32:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:08.940 10:32:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:08.940 10:32:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:08.940 10:32:14 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:16:08.940 10:32:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:08.940 10:32:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:08.940 10:32:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:08.940 10:32:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:08.940 10:32:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:08.940 10:32:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:08.940 10:32:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:08.940 10:32:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.940 10:32:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:08.940 10:32:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:08.940 10:32:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:16:08.940 10:32:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:17.079 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:17.079 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:17.079 Found net devices under 0000:31:00.0: cvl_0_0 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:17.079 Found net devices under 0000:31:00.1: cvl_0_1 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush 
cvl_0_0 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:17.079 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:17.079 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.681 ms 00:16:17.079 00:16:17.079 --- 10.0.0.2 ping statistics --- 00:16:17.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.079 rtt min/avg/max/mdev = 0.681/0.681/0.681/0.000 ms 00:16:17.079 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:17.080 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:17.080 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.251 ms 00:16:17.080 00:16:17.080 --- 10.0.0.1 ping statistics --- 00:16:17.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.080 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:16:17.080 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:17.080 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:16:17.080 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:17.080 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:17.080 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:17.080 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:17.080 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:17.080 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:17.080 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:17.080 10:32:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:16:17.080 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:17.080 10:32:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:17.080 10:32:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:17.080 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1883272 00:16:17.080 10:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1883272 00:16:17.080 10:32:22 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:17.080 10:32:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 1883272 ']' 00:16:17.080 10:32:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.080 10:32:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:17.080 10:32:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:17.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:17.080 10:32:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:17.080 10:32:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:17.080 [2024-07-22 10:32:22.442930] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:16:17.080 [2024-07-22 10:32:22.442995] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:17.080 EAL: No free 2048 kB hugepages reported on node 1 00:16:17.080 [2024-07-22 10:32:22.537016] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.080 [2024-07-22 10:32:22.583289] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:17.080 [2024-07-22 10:32:22.583343] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:17.080 [2024-07-22 10:32:22.583352] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:17.080 [2024-07-22 10:32:22.583359] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:17.080 [2024-07-22 10:32:22.583365] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
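Everything from gather_supported_nvmf_pci_devs down to the ping checks builds the test topology: the two ice ports under 0000:31:00.0/1 are found as cvl_0_0 and cvl_0_1, cvl_0_0 is moved into a fresh network namespace to act as the target side, cvl_0_1 stays in the root namespace as the initiator, 10.0.0.2/24 and 10.0.0.1/24 are assigned respectively, TCP port 4420 is opened, reachability is verified both ways, and nvmf_tgt is started inside the namespace. A condensed sketch of the same plumbing, reusing this run's interface and namespace names:

# Condensed sketch of the namespace plumbing logged above; cvl_0_0/cvl_0_1 and
# the namespace name are the ones this particular run discovered and created.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target-side port

ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # allow NVMe/TCP in
ping -c 1 10.0.0.2                                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator

# Start the SPDK target inside the namespace (backgrounded here for illustration;
# the harness records the pid and waits for the RPC socket via waitforlisten).
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!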
00:16:17.080 [2024-07-22 10:32:22.583411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:17.649 10:32:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:17.649 10:32:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:16:17.649 10:32:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:17.649 10:32:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:17.649 10:32:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:17.649 10:32:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:17.649 10:32:23 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:17.649 10:32:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.649 10:32:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:17.649 [2024-07-22 10:32:23.280044] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:17.649 10:32:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.649 10:32:23 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:17.649 10:32:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.649 10:32:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:17.649 10:32:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.650 10:32:23 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:17.650 10:32:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.650 10:32:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:17.650 [2024-07-22 10:32:23.304325] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:17.650 10:32:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.650 10:32:23 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:17.650 10:32:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.650 10:32:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:17.650 NULL1 00:16:17.650 10:32:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.650 10:32:23 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:16:17.650 10:32:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.650 10:32:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:17.650 10:32:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.650 10:32:23 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:16:17.650 10:32:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.650 10:32:23 
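With nvmf_tgt listening on its RPC socket, the test provisions the target entirely over JSON-RPC: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 (serial SPDK00000000000001, any host allowed, at most 10 namespaces), a listener on 10.0.0.2:4420, and a 1000 MiB null bdev with 512-byte blocks attached as namespace 1. The same sequence written against scripts/rpc.py directly is shown below; the harness's rpc_cmd is a thin wrapper around it, and the default /var/tmp/spdk.sock socket is assumed.

# Provisioning sequence equivalent to the rpc_cmd calls above.
RPC=./scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192            # TCP transport, options exactly as
                                                        #   the harness passes them (-u: io-unit-size)
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
     -a -s SPDK00000000000001 -m 10                     # allow any host, max 10 namespaces
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
     -t tcp -a 10.0.0.2 -s 4420                         # listen inside the target namespace
$RPC bdev_null_create NULL1 1000 512                    # 1000 MiB null bdev, 512 B blocks
$RPC bdev_wait_for_examine
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1   # shows up as namespace 1

The fused_ordering example initiator then attaches to that subsystem using the trtype:tcp/traddr/trsvcid transport ID string recorded just below, and its per-iteration progress follows as the fused_ordering(N) counters.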
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:17.650 10:32:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.650 10:32:23 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:16:17.909 [2024-07-22 10:32:23.373631] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:16:17.909 [2024-07-22 10:32:23.373688] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1883542 ] 00:16:17.909 EAL: No free 2048 kB hugepages reported on node 1 00:16:18.167 Attached to nqn.2016-06.io.spdk:cnode1 00:16:18.167 Namespace ID: 1 size: 1GB 00:16:18.167 fused_ordering(0) 00:16:18.167 fused_ordering(1) 00:16:18.167 fused_ordering(2) 00:16:18.167 fused_ordering(3) 00:16:18.167 fused_ordering(4) 00:16:18.167 fused_ordering(5) 00:16:18.167 fused_ordering(6) 00:16:18.167 fused_ordering(7) 00:16:18.167 fused_ordering(8) 00:16:18.167 fused_ordering(9) 00:16:18.167 fused_ordering(10) 00:16:18.167 fused_ordering(11) 00:16:18.167 fused_ordering(12) 00:16:18.167 fused_ordering(13) 00:16:18.167 fused_ordering(14) 00:16:18.167 fused_ordering(15) 00:16:18.167 fused_ordering(16) 00:16:18.167 fused_ordering(17) 00:16:18.167 fused_ordering(18) 00:16:18.167 fused_ordering(19) 00:16:18.168 fused_ordering(20) 00:16:18.168 fused_ordering(21) 00:16:18.168 fused_ordering(22) 00:16:18.168 fused_ordering(23) 00:16:18.168 fused_ordering(24) 00:16:18.168 fused_ordering(25) 00:16:18.168 fused_ordering(26) 00:16:18.168 fused_ordering(27) 00:16:18.168 fused_ordering(28) 00:16:18.168 fused_ordering(29) 00:16:18.168 fused_ordering(30) 00:16:18.168 fused_ordering(31) 00:16:18.168 fused_ordering(32) 00:16:18.168 fused_ordering(33) 00:16:18.168 fused_ordering(34) 00:16:18.168 fused_ordering(35) 00:16:18.168 fused_ordering(36) 00:16:18.168 fused_ordering(37) 00:16:18.168 fused_ordering(38) 00:16:18.168 fused_ordering(39) 00:16:18.168 fused_ordering(40) 00:16:18.168 fused_ordering(41) 00:16:18.168 fused_ordering(42) 00:16:18.168 fused_ordering(43) 00:16:18.168 fused_ordering(44) 00:16:18.168 fused_ordering(45) 00:16:18.168 fused_ordering(46) 00:16:18.168 fused_ordering(47) 00:16:18.168 fused_ordering(48) 00:16:18.168 fused_ordering(49) 00:16:18.168 fused_ordering(50) 00:16:18.168 fused_ordering(51) 00:16:18.168 fused_ordering(52) 00:16:18.168 fused_ordering(53) 00:16:18.168 fused_ordering(54) 00:16:18.168 fused_ordering(55) 00:16:18.168 fused_ordering(56) 00:16:18.168 fused_ordering(57) 00:16:18.168 fused_ordering(58) 00:16:18.168 fused_ordering(59) 00:16:18.168 fused_ordering(60) 00:16:18.168 fused_ordering(61) 00:16:18.168 fused_ordering(62) 00:16:18.168 fused_ordering(63) 00:16:18.168 fused_ordering(64) 00:16:18.168 fused_ordering(65) 00:16:18.168 fused_ordering(66) 00:16:18.168 fused_ordering(67) 00:16:18.168 fused_ordering(68) 00:16:18.168 fused_ordering(69) 00:16:18.168 fused_ordering(70) 00:16:18.168 fused_ordering(71) 00:16:18.168 fused_ordering(72) 00:16:18.168 fused_ordering(73) 00:16:18.168 fused_ordering(74) 00:16:18.168 fused_ordering(75) 00:16:18.168 fused_ordering(76) 00:16:18.168 fused_ordering(77) 00:16:18.168 fused_ordering(78) 00:16:18.168 
fused_ordering(79) 00:16:18.168 fused_ordering(80) 00:16:18.168 fused_ordering(81) 00:16:18.168 fused_ordering(82) 00:16:18.168 fused_ordering(83) 00:16:18.168 fused_ordering(84) 00:16:18.168 fused_ordering(85) 00:16:18.168 fused_ordering(86) 00:16:18.168 fused_ordering(87) 00:16:18.168 fused_ordering(88) 00:16:18.168 fused_ordering(89) 00:16:18.168 fused_ordering(90) 00:16:18.168 fused_ordering(91) 00:16:18.168 fused_ordering(92) 00:16:18.168 fused_ordering(93) 00:16:18.168 fused_ordering(94) 00:16:18.168 fused_ordering(95) 00:16:18.168 fused_ordering(96) 00:16:18.168 fused_ordering(97) 00:16:18.168 fused_ordering(98) 00:16:18.168 fused_ordering(99) 00:16:18.168 fused_ordering(100) 00:16:18.168 fused_ordering(101) 00:16:18.168 fused_ordering(102) 00:16:18.168 fused_ordering(103) 00:16:18.168 fused_ordering(104) 00:16:18.168 fused_ordering(105) 00:16:18.168 fused_ordering(106) 00:16:18.168 fused_ordering(107) 00:16:18.168 fused_ordering(108) 00:16:18.168 fused_ordering(109) 00:16:18.168 fused_ordering(110) 00:16:18.168 fused_ordering(111) 00:16:18.168 fused_ordering(112) 00:16:18.168 fused_ordering(113) 00:16:18.168 fused_ordering(114) 00:16:18.168 fused_ordering(115) 00:16:18.168 fused_ordering(116) 00:16:18.168 fused_ordering(117) 00:16:18.168 fused_ordering(118) 00:16:18.168 fused_ordering(119) 00:16:18.168 fused_ordering(120) 00:16:18.168 fused_ordering(121) 00:16:18.168 fused_ordering(122) 00:16:18.168 fused_ordering(123) 00:16:18.168 fused_ordering(124) 00:16:18.168 fused_ordering(125) 00:16:18.168 fused_ordering(126) 00:16:18.168 fused_ordering(127) 00:16:18.168 fused_ordering(128) 00:16:18.168 fused_ordering(129) 00:16:18.168 fused_ordering(130) 00:16:18.168 fused_ordering(131) 00:16:18.168 fused_ordering(132) 00:16:18.168 fused_ordering(133) 00:16:18.168 fused_ordering(134) 00:16:18.168 fused_ordering(135) 00:16:18.168 fused_ordering(136) 00:16:18.168 fused_ordering(137) 00:16:18.168 fused_ordering(138) 00:16:18.168 fused_ordering(139) 00:16:18.168 fused_ordering(140) 00:16:18.168 fused_ordering(141) 00:16:18.168 fused_ordering(142) 00:16:18.168 fused_ordering(143) 00:16:18.168 fused_ordering(144) 00:16:18.168 fused_ordering(145) 00:16:18.168 fused_ordering(146) 00:16:18.168 fused_ordering(147) 00:16:18.168 fused_ordering(148) 00:16:18.168 fused_ordering(149) 00:16:18.168 fused_ordering(150) 00:16:18.168 fused_ordering(151) 00:16:18.168 fused_ordering(152) 00:16:18.168 fused_ordering(153) 00:16:18.168 fused_ordering(154) 00:16:18.168 fused_ordering(155) 00:16:18.168 fused_ordering(156) 00:16:18.168 fused_ordering(157) 00:16:18.168 fused_ordering(158) 00:16:18.168 fused_ordering(159) 00:16:18.168 fused_ordering(160) 00:16:18.168 fused_ordering(161) 00:16:18.168 fused_ordering(162) 00:16:18.168 fused_ordering(163) 00:16:18.168 fused_ordering(164) 00:16:18.168 fused_ordering(165) 00:16:18.168 fused_ordering(166) 00:16:18.168 fused_ordering(167) 00:16:18.168 fused_ordering(168) 00:16:18.168 fused_ordering(169) 00:16:18.168 fused_ordering(170) 00:16:18.168 fused_ordering(171) 00:16:18.168 fused_ordering(172) 00:16:18.168 fused_ordering(173) 00:16:18.168 fused_ordering(174) 00:16:18.168 fused_ordering(175) 00:16:18.168 fused_ordering(176) 00:16:18.168 fused_ordering(177) 00:16:18.168 fused_ordering(178) 00:16:18.168 fused_ordering(179) 00:16:18.168 fused_ordering(180) 00:16:18.168 fused_ordering(181) 00:16:18.168 fused_ordering(182) 00:16:18.168 fused_ordering(183) 00:16:18.168 fused_ordering(184) 00:16:18.168 fused_ordering(185) 00:16:18.168 fused_ordering(186) 00:16:18.168 
fused_ordering(187) 00:16:18.168 fused_ordering(188) 00:16:18.168 fused_ordering(189) 00:16:18.168 fused_ordering(190) 00:16:18.168 fused_ordering(191) 00:16:18.168 fused_ordering(192) 00:16:18.168 fused_ordering(193) 00:16:18.168 fused_ordering(194) 00:16:18.168 fused_ordering(195) 00:16:18.168 fused_ordering(196) 00:16:18.168 fused_ordering(197) 00:16:18.168 fused_ordering(198) 00:16:18.168 fused_ordering(199) 00:16:18.168 fused_ordering(200) 00:16:18.168 fused_ordering(201) 00:16:18.168 fused_ordering(202) 00:16:18.168 fused_ordering(203) 00:16:18.168 fused_ordering(204) 00:16:18.168 fused_ordering(205) 00:16:18.737 fused_ordering(206) 00:16:18.737 fused_ordering(207) 00:16:18.737 fused_ordering(208) 00:16:18.737 fused_ordering(209) 00:16:18.737 fused_ordering(210) 00:16:18.737 fused_ordering(211) 00:16:18.737 fused_ordering(212) 00:16:18.737 fused_ordering(213) 00:16:18.737 fused_ordering(214) 00:16:18.737 fused_ordering(215) 00:16:18.737 fused_ordering(216) 00:16:18.737 fused_ordering(217) 00:16:18.737 fused_ordering(218) 00:16:18.737 fused_ordering(219) 00:16:18.737 fused_ordering(220) 00:16:18.737 fused_ordering(221) 00:16:18.737 fused_ordering(222) 00:16:18.737 fused_ordering(223) 00:16:18.737 fused_ordering(224) 00:16:18.737 fused_ordering(225) 00:16:18.737 fused_ordering(226) 00:16:18.737 fused_ordering(227) 00:16:18.737 fused_ordering(228) 00:16:18.737 fused_ordering(229) 00:16:18.737 fused_ordering(230) 00:16:18.737 fused_ordering(231) 00:16:18.737 fused_ordering(232) 00:16:18.737 fused_ordering(233) 00:16:18.737 fused_ordering(234) 00:16:18.737 fused_ordering(235) 00:16:18.737 fused_ordering(236) 00:16:18.737 fused_ordering(237) 00:16:18.737 fused_ordering(238) 00:16:18.737 fused_ordering(239) 00:16:18.737 fused_ordering(240) 00:16:18.737 fused_ordering(241) 00:16:18.737 fused_ordering(242) 00:16:18.737 fused_ordering(243) 00:16:18.737 fused_ordering(244) 00:16:18.737 fused_ordering(245) 00:16:18.737 fused_ordering(246) 00:16:18.737 fused_ordering(247) 00:16:18.737 fused_ordering(248) 00:16:18.737 fused_ordering(249) 00:16:18.737 fused_ordering(250) 00:16:18.737 fused_ordering(251) 00:16:18.737 fused_ordering(252) 00:16:18.737 fused_ordering(253) 00:16:18.737 fused_ordering(254) 00:16:18.737 fused_ordering(255) 00:16:18.737 fused_ordering(256) 00:16:18.737 fused_ordering(257) 00:16:18.737 fused_ordering(258) 00:16:18.737 fused_ordering(259) 00:16:18.737 fused_ordering(260) 00:16:18.737 fused_ordering(261) 00:16:18.737 fused_ordering(262) 00:16:18.737 fused_ordering(263) 00:16:18.737 fused_ordering(264) 00:16:18.737 fused_ordering(265) 00:16:18.738 fused_ordering(266) 00:16:18.738 fused_ordering(267) 00:16:18.738 fused_ordering(268) 00:16:18.738 fused_ordering(269) 00:16:18.738 fused_ordering(270) 00:16:18.738 fused_ordering(271) 00:16:18.738 fused_ordering(272) 00:16:18.738 fused_ordering(273) 00:16:18.738 fused_ordering(274) 00:16:18.738 fused_ordering(275) 00:16:18.738 fused_ordering(276) 00:16:18.738 fused_ordering(277) 00:16:18.738 fused_ordering(278) 00:16:18.738 fused_ordering(279) 00:16:18.738 fused_ordering(280) 00:16:18.738 fused_ordering(281) 00:16:18.738 fused_ordering(282) 00:16:18.738 fused_ordering(283) 00:16:18.738 fused_ordering(284) 00:16:18.738 fused_ordering(285) 00:16:18.738 fused_ordering(286) 00:16:18.738 fused_ordering(287) 00:16:18.738 fused_ordering(288) 00:16:18.738 fused_ordering(289) 00:16:18.738 fused_ordering(290) 00:16:18.738 fused_ordering(291) 00:16:18.738 fused_ordering(292) 00:16:18.738 fused_ordering(293) 00:16:18.738 fused_ordering(294) 
00:16:18.738 fused_ordering(295) 00:16:18.738 fused_ordering(296) 00:16:18.738 fused_ordering(297) 00:16:18.738 fused_ordering(298) 00:16:18.738 fused_ordering(299) 00:16:18.738 fused_ordering(300) 00:16:18.738 fused_ordering(301) 00:16:18.738 fused_ordering(302) 00:16:18.738 fused_ordering(303) 00:16:18.738 fused_ordering(304) 00:16:18.738 fused_ordering(305) 00:16:18.738 fused_ordering(306) 00:16:18.738 fused_ordering(307) 00:16:18.738 fused_ordering(308) 00:16:18.738 fused_ordering(309) 00:16:18.738 fused_ordering(310) 00:16:18.738 fused_ordering(311) 00:16:18.738 fused_ordering(312) 00:16:18.738 fused_ordering(313) 00:16:18.738 fused_ordering(314) 00:16:18.738 fused_ordering(315) 00:16:18.738 fused_ordering(316) 00:16:18.738 fused_ordering(317) 00:16:18.738 fused_ordering(318) 00:16:18.738 fused_ordering(319) 00:16:18.738 fused_ordering(320) 00:16:18.738 fused_ordering(321) 00:16:18.738 fused_ordering(322) 00:16:18.738 fused_ordering(323) 00:16:18.738 fused_ordering(324) 00:16:18.738 fused_ordering(325) 00:16:18.738 fused_ordering(326) 00:16:18.738 fused_ordering(327) 00:16:18.738 fused_ordering(328) 00:16:18.738 fused_ordering(329) 00:16:18.738 fused_ordering(330) 00:16:18.738 fused_ordering(331) 00:16:18.738 fused_ordering(332) 00:16:18.738 fused_ordering(333) 00:16:18.738 fused_ordering(334) 00:16:18.738 fused_ordering(335) 00:16:18.738 fused_ordering(336) 00:16:18.738 fused_ordering(337) 00:16:18.738 fused_ordering(338) 00:16:18.738 fused_ordering(339) 00:16:18.738 fused_ordering(340) 00:16:18.738 fused_ordering(341) 00:16:18.738 fused_ordering(342) 00:16:18.738 fused_ordering(343) 00:16:18.738 fused_ordering(344) 00:16:18.738 fused_ordering(345) 00:16:18.738 fused_ordering(346) 00:16:18.738 fused_ordering(347) 00:16:18.738 fused_ordering(348) 00:16:18.738 fused_ordering(349) 00:16:18.738 fused_ordering(350) 00:16:18.738 fused_ordering(351) 00:16:18.738 fused_ordering(352) 00:16:18.738 fused_ordering(353) 00:16:18.738 fused_ordering(354) 00:16:18.738 fused_ordering(355) 00:16:18.738 fused_ordering(356) 00:16:18.738 fused_ordering(357) 00:16:18.738 fused_ordering(358) 00:16:18.738 fused_ordering(359) 00:16:18.738 fused_ordering(360) 00:16:18.738 fused_ordering(361) 00:16:18.738 fused_ordering(362) 00:16:18.738 fused_ordering(363) 00:16:18.738 fused_ordering(364) 00:16:18.738 fused_ordering(365) 00:16:18.738 fused_ordering(366) 00:16:18.738 fused_ordering(367) 00:16:18.738 fused_ordering(368) 00:16:18.738 fused_ordering(369) 00:16:18.738 fused_ordering(370) 00:16:18.738 fused_ordering(371) 00:16:18.738 fused_ordering(372) 00:16:18.738 fused_ordering(373) 00:16:18.738 fused_ordering(374) 00:16:18.738 fused_ordering(375) 00:16:18.738 fused_ordering(376) 00:16:18.738 fused_ordering(377) 00:16:18.738 fused_ordering(378) 00:16:18.738 fused_ordering(379) 00:16:18.738 fused_ordering(380) 00:16:18.738 fused_ordering(381) 00:16:18.738 fused_ordering(382) 00:16:18.738 fused_ordering(383) 00:16:18.738 fused_ordering(384) 00:16:18.738 fused_ordering(385) 00:16:18.738 fused_ordering(386) 00:16:18.738 fused_ordering(387) 00:16:18.738 fused_ordering(388) 00:16:18.738 fused_ordering(389) 00:16:18.738 fused_ordering(390) 00:16:18.738 fused_ordering(391) 00:16:18.738 fused_ordering(392) 00:16:18.738 fused_ordering(393) 00:16:18.738 fused_ordering(394) 00:16:18.738 fused_ordering(395) 00:16:18.738 fused_ordering(396) 00:16:18.738 fused_ordering(397) 00:16:18.738 fused_ordering(398) 00:16:18.738 fused_ordering(399) 00:16:18.738 fused_ordering(400) 00:16:18.738 fused_ordering(401) 00:16:18.738 
fused_ordering(402) 00:16:18.738 fused_ordering(403) 00:16:18.738 fused_ordering(404) 00:16:18.738 fused_ordering(405) 00:16:18.738 fused_ordering(406) 00:16:18.738 fused_ordering(407) 00:16:18.738 fused_ordering(408) 00:16:18.738 fused_ordering(409) 00:16:18.738 fused_ordering(410) 00:16:18.997 fused_ordering(411) 00:16:18.997 fused_ordering(412) 00:16:18.997 fused_ordering(413) 00:16:18.997 fused_ordering(414) 00:16:18.997 fused_ordering(415) 00:16:18.997 fused_ordering(416) 00:16:18.997 fused_ordering(417) 00:16:18.997 fused_ordering(418) 00:16:18.997 fused_ordering(419) 00:16:18.997 fused_ordering(420) 00:16:18.997 fused_ordering(421) 00:16:18.997 fused_ordering(422) 00:16:18.997 fused_ordering(423) 00:16:18.997 fused_ordering(424) 00:16:18.997 fused_ordering(425) 00:16:18.997 fused_ordering(426) 00:16:18.997 fused_ordering(427) 00:16:18.997 fused_ordering(428) 00:16:18.997 fused_ordering(429) 00:16:18.997 fused_ordering(430) 00:16:18.997 fused_ordering(431) 00:16:18.997 fused_ordering(432) 00:16:18.997 fused_ordering(433) 00:16:18.997 fused_ordering(434) 00:16:18.997 fused_ordering(435) 00:16:18.997 fused_ordering(436) 00:16:18.997 fused_ordering(437) 00:16:18.997 fused_ordering(438) 00:16:18.997 fused_ordering(439) 00:16:18.997 fused_ordering(440) 00:16:18.997 fused_ordering(441) 00:16:18.997 fused_ordering(442) 00:16:18.997 fused_ordering(443) 00:16:18.997 fused_ordering(444) 00:16:18.997 fused_ordering(445) 00:16:18.997 fused_ordering(446) 00:16:18.997 fused_ordering(447) 00:16:18.997 fused_ordering(448) 00:16:18.997 fused_ordering(449) 00:16:18.997 fused_ordering(450) 00:16:18.997 fused_ordering(451) 00:16:18.997 fused_ordering(452) 00:16:18.997 fused_ordering(453) 00:16:18.997 fused_ordering(454) 00:16:18.997 fused_ordering(455) 00:16:18.997 fused_ordering(456) 00:16:18.997 fused_ordering(457) 00:16:18.997 fused_ordering(458) 00:16:18.997 fused_ordering(459) 00:16:18.997 fused_ordering(460) 00:16:18.997 fused_ordering(461) 00:16:18.997 fused_ordering(462) 00:16:18.997 fused_ordering(463) 00:16:18.997 fused_ordering(464) 00:16:18.997 fused_ordering(465) 00:16:18.997 fused_ordering(466) 00:16:18.997 fused_ordering(467) 00:16:18.997 fused_ordering(468) 00:16:18.997 fused_ordering(469) 00:16:18.997 fused_ordering(470) 00:16:18.997 fused_ordering(471) 00:16:18.997 fused_ordering(472) 00:16:18.997 fused_ordering(473) 00:16:18.997 fused_ordering(474) 00:16:18.997 fused_ordering(475) 00:16:18.997 fused_ordering(476) 00:16:18.997 fused_ordering(477) 00:16:18.997 fused_ordering(478) 00:16:18.997 fused_ordering(479) 00:16:18.997 fused_ordering(480) 00:16:18.997 fused_ordering(481) 00:16:18.997 fused_ordering(482) 00:16:18.997 fused_ordering(483) 00:16:18.997 fused_ordering(484) 00:16:18.997 fused_ordering(485) 00:16:18.997 fused_ordering(486) 00:16:18.997 fused_ordering(487) 00:16:18.997 fused_ordering(488) 00:16:18.997 fused_ordering(489) 00:16:18.997 fused_ordering(490) 00:16:18.997 fused_ordering(491) 00:16:18.997 fused_ordering(492) 00:16:18.997 fused_ordering(493) 00:16:18.997 fused_ordering(494) 00:16:18.997 fused_ordering(495) 00:16:18.997 fused_ordering(496) 00:16:18.997 fused_ordering(497) 00:16:18.997 fused_ordering(498) 00:16:18.997 fused_ordering(499) 00:16:18.997 fused_ordering(500) 00:16:18.997 fused_ordering(501) 00:16:18.997 fused_ordering(502) 00:16:18.997 fused_ordering(503) 00:16:18.997 fused_ordering(504) 00:16:18.997 fused_ordering(505) 00:16:18.997 fused_ordering(506) 00:16:18.997 fused_ordering(507) 00:16:18.997 fused_ordering(508) 00:16:18.997 fused_ordering(509) 
00:16:18.997 fused_ordering(510) 00:16:18.997 fused_ordering(511) 00:16:18.997 fused_ordering(512) 00:16:18.997 fused_ordering(513) 00:16:18.997 fused_ordering(514) 00:16:18.997 fused_ordering(515) 00:16:18.997 fused_ordering(516) 00:16:18.997 fused_ordering(517) 00:16:18.997 fused_ordering(518) 00:16:18.997 fused_ordering(519) 00:16:18.997 fused_ordering(520) 00:16:18.997 fused_ordering(521) 00:16:18.997 fused_ordering(522) 00:16:18.997 fused_ordering(523) 00:16:18.997 fused_ordering(524) 00:16:18.997 fused_ordering(525) 00:16:18.997 fused_ordering(526) 00:16:18.998 fused_ordering(527) 00:16:18.998 fused_ordering(528) 00:16:18.998 fused_ordering(529) 00:16:18.998 fused_ordering(530) 00:16:18.998 fused_ordering(531) 00:16:18.998 fused_ordering(532) 00:16:18.998 fused_ordering(533) 00:16:18.998 fused_ordering(534) 00:16:18.998 fused_ordering(535) 00:16:18.998 fused_ordering(536) 00:16:18.998 fused_ordering(537) 00:16:18.998 fused_ordering(538) 00:16:18.998 fused_ordering(539) 00:16:18.998 fused_ordering(540) 00:16:18.998 fused_ordering(541) 00:16:18.998 fused_ordering(542) 00:16:18.998 fused_ordering(543) 00:16:18.998 fused_ordering(544) 00:16:18.998 fused_ordering(545) 00:16:18.998 fused_ordering(546) 00:16:18.998 fused_ordering(547) 00:16:18.998 fused_ordering(548) 00:16:18.998 fused_ordering(549) 00:16:18.998 fused_ordering(550) 00:16:18.998 fused_ordering(551) 00:16:18.998 fused_ordering(552) 00:16:18.998 fused_ordering(553) 00:16:18.998 fused_ordering(554) 00:16:18.998 fused_ordering(555) 00:16:18.998 fused_ordering(556) 00:16:18.998 fused_ordering(557) 00:16:18.998 fused_ordering(558) 00:16:18.998 fused_ordering(559) 00:16:18.998 fused_ordering(560) 00:16:18.998 fused_ordering(561) 00:16:18.998 fused_ordering(562) 00:16:18.998 fused_ordering(563) 00:16:18.998 fused_ordering(564) 00:16:18.998 fused_ordering(565) 00:16:18.998 fused_ordering(566) 00:16:18.998 fused_ordering(567) 00:16:18.998 fused_ordering(568) 00:16:18.998 fused_ordering(569) 00:16:18.998 fused_ordering(570) 00:16:18.998 fused_ordering(571) 00:16:18.998 fused_ordering(572) 00:16:18.998 fused_ordering(573) 00:16:18.998 fused_ordering(574) 00:16:18.998 fused_ordering(575) 00:16:18.998 fused_ordering(576) 00:16:18.998 fused_ordering(577) 00:16:18.998 fused_ordering(578) 00:16:18.998 fused_ordering(579) 00:16:18.998 fused_ordering(580) 00:16:18.998 fused_ordering(581) 00:16:18.998 fused_ordering(582) 00:16:18.998 fused_ordering(583) 00:16:18.998 fused_ordering(584) 00:16:18.998 fused_ordering(585) 00:16:18.998 fused_ordering(586) 00:16:18.998 fused_ordering(587) 00:16:18.998 fused_ordering(588) 00:16:18.998 fused_ordering(589) 00:16:18.998 fused_ordering(590) 00:16:18.998 fused_ordering(591) 00:16:18.998 fused_ordering(592) 00:16:18.998 fused_ordering(593) 00:16:18.998 fused_ordering(594) 00:16:18.998 fused_ordering(595) 00:16:18.998 fused_ordering(596) 00:16:18.998 fused_ordering(597) 00:16:18.998 fused_ordering(598) 00:16:18.998 fused_ordering(599) 00:16:18.998 fused_ordering(600) 00:16:18.998 fused_ordering(601) 00:16:18.998 fused_ordering(602) 00:16:18.998 fused_ordering(603) 00:16:18.998 fused_ordering(604) 00:16:18.998 fused_ordering(605) 00:16:18.998 fused_ordering(606) 00:16:18.998 fused_ordering(607) 00:16:18.998 fused_ordering(608) 00:16:18.998 fused_ordering(609) 00:16:18.998 fused_ordering(610) 00:16:18.998 fused_ordering(611) 00:16:18.998 fused_ordering(612) 00:16:18.998 fused_ordering(613) 00:16:18.998 fused_ordering(614) 00:16:18.998 fused_ordering(615) 00:16:19.569 fused_ordering(616) 00:16:19.569 
fused_ordering(617) [fused_ordering(618) through fused_ordering(1022) elided: identical counter lines, timestamps 00:16:19.569 through 00:16:20.142] 00:16:20.142 fused_ordering(1023)
00:16:20.142 10:32:25 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:16:20.142 10:32:25 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:16:20.142 10:32:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:20.142 10:32:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:16:20.142 10:32:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:20.142 10:32:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:16:20.142 10:32:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:20.142 10:32:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r
nvme-tcp 00:16:20.142 rmmod nvme_tcp 00:16:20.142 rmmod nvme_fabrics 00:16:20.142 rmmod nvme_keyring 00:16:20.142 10:32:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:20.142 10:32:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:16:20.142 10:32:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:16:20.142 10:32:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1883272 ']' 00:16:20.142 10:32:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1883272 00:16:20.142 10:32:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 1883272 ']' 00:16:20.142 10:32:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 1883272 00:16:20.142 10:32:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:16:20.142 10:32:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:20.142 10:32:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1883272 00:16:20.142 10:32:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:20.142 10:32:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:20.142 10:32:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1883272' 00:16:20.142 killing process with pid 1883272 00:16:20.142 10:32:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 1883272 00:16:20.142 10:32:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 1883272 00:16:20.401 10:32:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:20.401 10:32:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:20.401 10:32:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:20.401 10:32:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:20.401 10:32:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:20.401 10:32:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:20.401 10:32:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:20.401 10:32:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:22.313 10:32:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:22.314 00:16:22.314 real 0m13.857s 00:16:22.314 user 0m7.184s 00:16:22.314 sys 0m7.342s 00:16:22.314 10:32:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:22.314 10:32:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:22.314 ************************************ 00:16:22.314 END TEST nvmf_fused_ordering 00:16:22.314 ************************************ 00:16:22.314 10:32:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:22.314 10:32:27 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:16:22.314 10:32:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:22.314 10:32:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
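The nvmftestfini sequence logged above reduces to a handful of standard commands: unload the NVMe-oF initiator modules, stop the nvmf_tgt process, and tear down the test namespace. A minimal bash sketch of that cleanup follows; it paraphrases what the log shows rather than quoting nvmf/common.sh, the pid and interface names (1883272, cvl_0_0_ns_spdk, cvl_0_1) are simply the values from this run, and the 'ip netns del' is an assumption standing in for the remove_spdk_ns helper, whose body is not shown here.

    nvmfpid=1883272                      # nvmf_tgt pid reported earlier in this log

    # Unload the initiator-side modules; this is what produced the
    # rmmod nvme_tcp / nvme_fabrics / nvme_keyring messages above.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    # Stop the target application and wait until the pid is gone.
    kill "$nvmfpid" 2>/dev/null || true
    while kill -0 "$nvmfpid" 2>/dev/null; do sleep 0.5; done

    # Drop the test network namespace and flush the initiator-side address,
    # matching the final 'ip -4 addr flush cvl_0_1' in the log.
    ip netns del cvl_0_0_ns_spdk 2>/dev/null || true
    ip -4 addr flush cvl_0_1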
00:16:22.314 10:32:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:22.574 ************************************ 00:16:22.575 START TEST nvmf_delete_subsystem 00:16:22.575 ************************************ 00:16:22.575 10:32:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:16:22.575 * Looking for test storage... 00:16:22.575 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:22.575 10:32:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:22.575 10:32:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:16:22.575 10:32:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:22.575 10:32:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:22.575 10:32:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:22.575 10:32:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:22.575 10:32:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:22.575 10:32:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:22.575 10:32:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:22.575 10:32:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:22.575 10:32:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:22.575 10:32:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:22.575 10:32:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:22.575 10:32:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:22.575 10:32:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:22.575 10:32:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:22.575 10:32:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:22.575 10:32:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:22.575 10:32:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:22.575 10:32:28 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:22.575 10:32:28 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:22.575 10:32:28 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:22.575 10:32:28 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.575 10:32:28 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.575 10:32:28 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.575 10:32:28 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:16:22.575 10:32:28 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.575 10:32:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:16:22.575 10:32:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:22.575 10:32:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:22.575 10:32:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:22.575 10:32:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:22.575 10:32:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:22.575 10:32:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:22.575 10:32:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:22.575 10:32:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:22.575 10:32:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:16:22.575 10:32:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:22.575 10:32:28 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:22.575 10:32:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:22.575 10:32:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:22.575 10:32:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:22.575 10:32:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.575 10:32:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:22.575 10:32:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:22.575 10:32:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:22.575 10:32:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:22.575 10:32:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:16:22.575 10:32:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:30.709 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:30.709 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:30.709 
10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:30.709 Found net devices under 0000:31:00.0: cvl_0_0 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:30.709 Found net devices under 0000:31:00.1: cvl_0_1 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:30.709 10:32:36 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:30.709 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:30.709 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:30.709 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:16:30.709 00:16:30.709 --- 10.0.0.2 ping statistics --- 00:16:30.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.709 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:16:30.710 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:30.969 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:30.969 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.309 ms 00:16:30.969 00:16:30.969 --- 10.0.0.1 ping statistics --- 00:16:30.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.969 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:16:30.969 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:30.969 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:16:30.969 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:30.969 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:30.969 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:30.970 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:30.970 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:30.970 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:30.970 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:30.970 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:16:30.970 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:30.970 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:30.970 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:30.970 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1888708 00:16:30.970 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1888708 00:16:30.970 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:30.970 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 1888708 ']' 00:16:30.970 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.970 10:32:36 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:30.970 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:30.970 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:30.970 10:32:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:30.970 [2024-07-22 10:32:36.499760] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:16:30.970 [2024-07-22 10:32:36.499827] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:30.970 EAL: No free 2048 kB hugepages reported on node 1 00:16:30.970 [2024-07-22 10:32:36.577348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:30.970 [2024-07-22 10:32:36.616886] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:30.970 [2024-07-22 10:32:36.616926] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:30.970 [2024-07-22 10:32:36.616934] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:30.970 [2024-07-22 10:32:36.616941] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:30.970 [2024-07-22 10:32:36.616946] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
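The nvmfappstart step that produced the EAL and app_setup_trace notices above comes down to launching nvmf_tgt inside the target namespace and waiting until its RPC socket answers. A rough sketch with the parameters visible in this run (shared-memory id 0, trace mask 0xFFFF, core mask 0x3, socket /var/tmp/spdk.sock); the spdk_get_version poll is a stand-in for the waitforlisten helper, and the relative paths assume the SPDK repository root as the working directory.

    # Start the target inside the namespace created during nvmftestinit.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!

    # Wait for the RPC server: the UNIX socket lives on the shared filesystem,
    # so rpc.py can poll it from outside the namespace.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "nvmf_tgt (pid $nvmfpid) listening on /var/tmp/spdk.sock"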
00:16:30.970 [2024-07-22 10:32:36.617092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:30.970 [2024-07-22 10:32:36.617093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.908 10:32:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:31.908 10:32:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:16:31.908 10:32:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:31.908 10:32:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:31.908 10:32:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:31.908 10:32:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:31.908 10:32:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:31.908 10:32:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.908 10:32:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:31.908 [2024-07-22 10:32:37.331284] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:31.908 10:32:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.908 10:32:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:31.908 10:32:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.908 10:32:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:31.908 10:32:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.908 10:32:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:31.908 10:32:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.908 10:32:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:31.908 [2024-07-22 10:32:37.355463] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:31.908 10:32:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.908 10:32:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:31.908 10:32:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.908 10:32:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:31.908 NULL1 00:16:31.908 10:32:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.908 10:32:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:31.908 10:32:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.908 10:32:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:31.908 Delay0 00:16:31.908 10:32:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.908 10:32:37 
nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:31.908 10:32:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.908 10:32:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:31.908 10:32:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.908 10:32:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1888912 00:16:31.908 10:32:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:16:31.908 10:32:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:16:31.908 EAL: No free 2048 kB hugepages reported on node 1 00:16:31.909 [2024-07-22 10:32:37.442053] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:33.893 10:32:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:33.893 10:32:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.893 10:32:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:34.241 Read completed with error (sct=0, sc=8) 00:16:34.241 Read completed with error (sct=0, sc=8) 00:16:34.241 starting I/O failed: -6 00:16:34.241 Read completed with error (sct=0, sc=8) 00:16:34.241 Write completed with error (sct=0, sc=8) 00:16:34.241 Read completed with error (sct=0, sc=8) 00:16:34.241 Write completed with error (sct=0, sc=8) 00:16:34.241 starting I/O failed: -6 00:16:34.241 Write completed with error (sct=0, sc=8) 00:16:34.241 Write completed with error (sct=0, sc=8) 00:16:34.241 Read completed with error (sct=0, sc=8) 00:16:34.241 Read completed with error (sct=0, sc=8) 00:16:34.241 starting I/O failed: -6 00:16:34.241 Write completed with error (sct=0, sc=8) 00:16:34.241 Write completed with error (sct=0, sc=8) 00:16:34.241 Read completed with error (sct=0, sc=8) 00:16:34.241 Write completed with error (sct=0, sc=8) 00:16:34.241 starting I/O failed: -6 00:16:34.241 Read completed with error (sct=0, sc=8) 00:16:34.241 Read completed with error (sct=0, sc=8) 00:16:34.241 Write completed with error (sct=0, sc=8) 00:16:34.241 Read completed with error (sct=0, sc=8) 00:16:34.241 starting I/O failed: -6 00:16:34.241 Read completed with error (sct=0, sc=8) 00:16:34.241 Read completed with error (sct=0, sc=8) 00:16:34.241 Read completed with error (sct=0, sc=8) 00:16:34.241 Read completed with error (sct=0, sc=8) 00:16:34.241 starting I/O failed: -6 00:16:34.241 Write completed with error (sct=0, sc=8) 00:16:34.241 Read completed with error (sct=0, sc=8) 00:16:34.241 Read completed with error (sct=0, sc=8) 00:16:34.241 Read completed with error (sct=0, sc=8) 00:16:34.241 starting I/O failed: -6 00:16:34.241 Write completed with error (sct=0, sc=8) 00:16:34.241 Write completed with error (sct=0, sc=8) 00:16:34.241 Read completed with error (sct=0, sc=8) 00:16:34.241 Read completed with error (sct=0, sc=8) 00:16:34.241 starting I/O failed: -6 00:16:34.241 
Read completed with error (sct=0, sc=8) 00:16:34.241 [repeated 'Read/Write completed with error (sct=0, sc=8)' and 'starting I/O failed: -6' lines elided]
[2024-07-22 10:32:39.661382] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1666b60 is same with the state(5) to be set 00:16:34.241 [repeated 'Read/Write completed with error (sct=0, sc=8)' and 'starting I/O failed: -6' lines elided]
[2024-07-22 10:32:40.623264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16685f0 is same with the state(5) to be set 00:16:35.180 [repeated completion-error lines elided]
[2024-07-22 10:32:40.665379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb4ec00cff0 is same with the state(5) to be set 00:16:35.180 [repeated completion-error lines elided]
[2024-07-22 10:32:40.665518] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1667580 is same with the state(5) to be set 00:16:35.180 [repeated completion-error lines elided]
[2024-07-22 10:32:40.665631] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1666f40 is same with the state(5) to be set 00:16:35.181 [repeated completion-error lines elided] 00:16:35.181 Write
completed with error (sct=0, sc=8)
00:16:35.181 Read completed with error (sct=0, sc=8)
00:16:35.181 Read completed with error (sct=0, sc=8)
00:16:35.181 Write completed with error (sct=0, sc=8)
00:16:35.181 Read completed with error (sct=0, sc=8)
00:16:35.181 Read completed with error (sct=0, sc=8)
00:16:35.181 Read completed with error (sct=0, sc=8)
00:16:35.181 Write completed with error (sct=0, sc=8)
00:16:35.181 Write completed with error (sct=0, sc=8)
00:16:35.181 [2024-07-22 10:32:40.665936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb4ec00d770 is same with the state(5) to be set
00:16:35.181 Initializing NVMe Controllers
00:16:35.181 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:16:35.181 Controller IO queue size 128, less than required.
00:16:35.181 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:16:35.181 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:16:35.181 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:16:35.181 Initialization complete. Launching workers.
00:16:35.181 ========================================================
00:16:35.181 Latency(us)
00:16:35.181 Device Information : IOPS MiB/s Average min max
00:16:35.181 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 176.66 0.09 879748.25 247.98 1008645.65
00:16:35.181 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 175.17 0.09 930303.52 371.66 1009295.76
00:16:35.181 ========================================================
00:16:35.181 Total : 351.83 0.17 904918.63 247.98 1009295.76
00:16:35.181
00:16:35.181 [2024-07-22 10:32:40.666435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16685f0 (9): Bad file descriptor
00:16:35.181 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:16:35.181 10:32:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:35.181 10:32:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:16:35.181 10:32:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1888912
00:16:35.181 10:32:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:16:35.750 10:32:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:16:35.750 10:32:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1888912
00:16:35.750 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1888912) - No such process
00:16:35.750 10:32:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1888912
00:16:35.750 10:32:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0
00:16:35.750 10:32:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 1888912
00:16:35.750 10:32:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait
00:16:35.750 10:32:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:16:35.750 10:32:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait
00:16:35.750 10:32:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 --
# case "$(type -t "$arg")" in 00:16:35.750 10:32:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 1888912 00:16:35.750 10:32:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:16:35.750 10:32:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:35.750 10:32:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:35.750 10:32:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:35.750 10:32:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:35.750 10:32:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.750 10:32:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:35.750 10:32:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.750 10:32:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:35.750 10:32:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.750 10:32:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:35.750 [2024-07-22 10:32:41.196641] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:35.750 10:32:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.750 10:32:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:35.750 10:32:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.750 10:32:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:35.750 10:32:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.750 10:32:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1889597 00:16:35.750 10:32:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:16:35.750 10:32:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:16:35.750 10:32:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1889597 00:16:35.750 10:32:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:35.750 EAL: No free 2048 kB hugepages reported on node 1 00:16:35.750 [2024-07-22 10:32:41.268535] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
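The lines that follow are delete_subsystem.sh polling the spdk_nvme_perf process it just launched: once the subsystem is deleted underneath the workload, the perf process exits and kill -0 starts failing. A minimal sketch of that polling pattern, paraphrased from the traced commands rather than taken from the script itself ($perf_pid is assumed to hold the PID captured at launch, 1889597 above):

  # Poll a background perf job until it exits or the retry budget runs out.
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do
      (( delay++ > 20 )) && { echo "perf $perf_pid did not exit in time" >&2; break; }
      sleep 0.5
  done
  # Reaching this point normally means the perf process is gone, which is the
  # expected outcome once the subsystem it was driving I/O against is deleted.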
00:16:36.320 10:32:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:36.320 10:32:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1889597 00:16:36.320 10:32:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:36.580 10:32:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:36.580 10:32:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1889597 00:16:36.580 10:32:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:37.151 10:32:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:37.151 10:32:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1889597 00:16:37.151 10:32:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:37.739 10:32:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:37.739 10:32:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1889597 00:16:37.739 10:32:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:38.306 10:32:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:38.306 10:32:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1889597 00:16:38.306 10:32:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:38.565 10:32:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:38.565 10:32:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1889597 00:16:38.565 10:32:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:39.134 Initializing NVMe Controllers 00:16:39.134 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:39.134 Controller IO queue size 128, less than required. 00:16:39.134 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:39.134 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:16:39.134 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:16:39.134 Initialization complete. Launching workers. 
00:16:39.134 ========================================================
00:16:39.134 Latency(us)
00:16:39.134 Device Information : IOPS MiB/s Average min max
00:16:39.134 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002369.11 1000138.33 1041663.90
00:16:39.134 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002991.19 1000269.03 1009053.67
00:16:39.134 ========================================================
00:16:39.134 Total : 256.00 0.12 1002680.15 1000138.33 1041663.90
00:16:39.134
00:16:39.134 10:32:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:16:39.134 10:32:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1889597
00:16:39.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1889597) - No such process
00:16:39.134 10:32:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1889597
00:16:39.134 10:32:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:16:39.134 10:32:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:16:39.134 10:32:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:16:39.134 10:32:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:16:39.134 10:32:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:16:39.134 10:32:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e
00:16:39.134 10:32:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:16:39.134 10:32:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:16:39.134 rmmod nvme_tcp
00:16:39.134 rmmod nvme_fabrics
00:16:39.134 rmmod nvme_keyring
00:16:39.134 10:32:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:16:39.134 10:32:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e
00:16:39.134 10:32:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0
00:16:39.134 10:32:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1888708 ']'
00:16:39.134 10:32:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1888708
00:16:39.134 10:32:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 1888708 ']'
00:16:39.134 10:32:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 1888708
00:16:39.134 10:32:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname
00:16:39.134 10:32:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:16:39.134 10:32:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1888708
00:16:39.394 10:32:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:16:39.394 10:32:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:16:39.394 10:32:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1888708'
00:16:39.394 killing process with pid 1888708
00:16:39.394 10:32:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 1888708
00:16:39.394 10:32:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait
1888708 00:16:39.394 10:32:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:39.394 10:32:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:39.394 10:32:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:39.394 10:32:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:39.394 10:32:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:39.394 10:32:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:39.394 10:32:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:39.394 10:32:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:41.938 10:32:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:41.938 00:16:41.938 real 0m19.035s 00:16:41.938 user 0m31.386s 00:16:41.938 sys 0m7.003s 00:16:41.938 10:32:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:41.938 10:32:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:41.938 ************************************ 00:16:41.938 END TEST nvmf_delete_subsystem 00:16:41.938 ************************************ 00:16:41.938 10:32:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:41.938 10:32:47 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:16:41.938 10:32:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:41.938 10:32:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:41.938 10:32:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:41.938 ************************************ 00:16:41.938 START TEST nvmf_ns_masking 00:16:41.938 ************************************ 00:16:41.938 10:32:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:16:41.938 * Looking for test storage... 
00:16:41.938 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:41.938 10:32:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:41.938 10:32:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:16:41.938 10:32:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:41.938 10:32:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:41.938 10:32:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:41.938 10:32:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:41.938 10:32:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:41.938 10:32:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:41.938 10:32:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:41.938 10:32:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:41.938 10:32:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:41.938 10:32:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:41.938 10:32:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:41.938 10:32:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:41.938 10:32:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:41.938 10:32:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:41.938 10:32:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:41.938 10:32:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:41.938 10:32:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:41.938 10:32:47 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:41.938 10:32:47 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:41.938 10:32:47 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:41.938 10:32:47 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.938 10:32:47 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.938 10:32:47 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.938 10:32:47 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:16:41.938 10:32:47 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.938 10:32:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:16:41.938 10:32:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:41.938 10:32:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:41.938 10:32:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:41.938 10:32:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:41.938 10:32:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:41.938 10:32:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:41.938 10:32:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:41.938 10:32:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:41.938 10:32:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:41.938 10:32:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:16:41.938 10:32:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:16:41.938 10:32:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:16:41.938 10:32:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=84d6ab43-82b7-4b66-b3d5-a3384e57a085 00:16:41.938 10:32:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:16:41.938 10:32:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=bf90d87f-ebd2-4d3b-b7a5-c450c837297a 00:16:41.938 10:32:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:16:41.938 10:32:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:16:41.938 10:32:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:16:41.939 10:32:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:16:41.939 10:32:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=b28dfd3b-88c0-46e9-b057-6ca99aaa3d18 00:16:41.939 10:32:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:16:41.939 10:32:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:41.939 10:32:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:41.939 10:32:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:41.939 10:32:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:41.939 10:32:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:41.939 10:32:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:41.939 10:32:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:41.939 10:32:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:41.939 10:32:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:41.939 10:32:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:41.939 10:32:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:16:41.939 10:32:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:50.073 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:50.073 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:50.073 
10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:50.073 Found net devices under 0000:31:00.0: cvl_0_0 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:50.073 Found net devices under 0000:31:00.1: cvl_0_1 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:50.073 10:32:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:50.073 10:32:55 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:50.073 10:32:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:50.073 10:32:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:50.073 10:32:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:50.073 10:32:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:50.073 10:32:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:50.073 10:32:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:50.073 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:50.073 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.553 ms 00:16:50.073 00:16:50.073 --- 10.0.0.2 ping statistics --- 00:16:50.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:50.073 rtt min/avg/max/mdev = 0.553/0.553/0.553/0.000 ms 00:16:50.073 10:32:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:50.073 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:50.073 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:16:50.073 00:16:50.073 --- 10.0.0.1 ping statistics --- 00:16:50.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:50.073 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:16:50.073 10:32:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:50.073 10:32:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:16:50.073 10:32:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:50.073 10:32:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:50.073 10:32:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:50.074 10:32:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:50.074 10:32:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:50.074 10:32:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:50.074 10:32:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:50.074 10:32:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:16:50.074 10:32:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:50.074 10:32:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:50.074 10:32:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:50.074 10:32:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1894975 00:16:50.074 10:32:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1894975 00:16:50.074 10:32:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:50.074 10:32:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1894975 ']' 00:16:50.074 10:32:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.074 10:32:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:50.074 10:32:55 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:50.074 10:32:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:50.074 10:32:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:50.074 [2024-07-22 10:32:55.297481] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:16:50.074 [2024-07-22 10:32:55.297537] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:50.074 EAL: No free 2048 kB hugepages reported on node 1 00:16:50.074 [2024-07-22 10:32:55.370608] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.074 [2024-07-22 10:32:55.402315] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:50.074 [2024-07-22 10:32:55.402357] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:50.074 [2024-07-22 10:32:55.402365] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:50.074 [2024-07-22 10:32:55.402373] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:50.074 [2024-07-22 10:32:55.402379] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:50.074 [2024-07-22 10:32:55.402418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.646 10:32:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:50.646 10:32:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:16:50.646 10:32:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:50.646 10:32:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:50.646 10:32:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:50.646 10:32:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:50.646 10:32:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:50.646 [2024-07-22 10:32:56.251137] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:50.646 10:32:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:16:50.646 10:32:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:16:50.646 10:32:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:50.906 Malloc1 00:16:50.906 10:32:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:51.166 Malloc2 00:16:51.166 10:32:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
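Stripped of the xtrace noise, the target-side setup performed here and in the entries traced just below reduces to a short RPC sequence: create the TCP transport, create two malloc bdevs, create the subsystem, then add a namespace and a listener. A condensed sketch of that sequence (rpc.py path shortened for readability; this is a paraphrase of the traced commands, not the script source):

  # Condensed view of the masking-target setup traced around this point.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                                  # TCP transport
  $rpc bdev_malloc_create 64 512 -b Malloc1                                     # 64 MB malloc bdevs, 512-byte blocks
  $rpc bdev_malloc_create 64 512 -b Malloc2
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420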
00:16:51.166 10:32:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:16:51.426 10:32:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:51.426 [2024-07-22 10:32:57.107035] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:51.687 10:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:16:51.687 10:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b28dfd3b-88c0-46e9-b057-6ca99aaa3d18 -a 10.0.0.2 -s 4420 -i 4 00:16:51.687 10:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:16:51.687 10:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:51.687 10:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:51.687 10:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:51.687 10:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:54.229 10:32:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:54.229 10:32:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:54.229 10:32:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:54.229 10:32:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:54.229 10:32:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:54.229 10:32:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:16:54.229 10:32:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:54.229 10:32:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:54.229 10:32:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:54.229 10:32:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:54.229 10:32:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:16:54.229 10:32:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:54.229 10:32:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:54.229 [ 0]:0x1 00:16:54.229 10:32:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:54.229 10:32:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:54.229 10:32:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=16453d8b6419462291ccb3048f3c783c 00:16:54.229 10:32:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 16453d8b6419462291ccb3048f3c783c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:54.229 10:32:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
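The "[ 0]:0x1" lines above and below come from the test's namespace-visibility check: it lists the namespaces the connected controller exposes and then reads the NGUID of the namespace, which the trace shows coming back as all zeros when the namespace is hidden from this host. A standalone sketch of that check, assuming the controller enumerated as /dev/nvme0 as in the trace (an illustration of the idiom, not the test's exact helper):

  # Return success if NSID $1 (e.g. 0x1) is exposed to this host via /dev/nvme0.
  ns_visible() {
      local nsid=$1 nguid
      nvme list-ns /dev/nvme0 | grep "$nsid" || return 1                 # NSID must be listed
      nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
      [[ $nguid != "00000000000000000000000000000000" ]]                 # masked namespaces report a zero NGUID
  }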
00:16:54.229 10:32:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:16:54.229 10:32:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:54.229 10:32:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:54.229 [ 0]:0x1 00:16:54.229 10:32:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:54.229 10:32:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:54.229 10:32:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=16453d8b6419462291ccb3048f3c783c 00:16:54.229 10:32:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 16453d8b6419462291ccb3048f3c783c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:54.229 10:32:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:16:54.229 10:32:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:54.229 10:32:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:54.229 [ 1]:0x2 00:16:54.229 10:32:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:54.229 10:32:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:54.229 10:32:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=99106b7bbad843269ba9cc1ff8915c85 00:16:54.229 10:32:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 99106b7bbad843269ba9cc1ff8915c85 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:54.229 10:32:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:16:54.229 10:32:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:54.229 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:54.229 10:32:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:54.489 10:32:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:16:54.489 10:33:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:16:54.489 10:33:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b28dfd3b-88c0-46e9-b057-6ca99aaa3d18 -a 10.0.0.2 -s 4420 -i 4 00:16:54.748 10:33:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:16:54.749 10:33:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:54.749 10:33:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:54.749 10:33:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:16:54.749 10:33:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:16:54.749 10:33:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:56.657 10:33:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:56.657 10:33:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:56.657 10:33:02 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:56.657 10:33:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:56.657 10:33:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:56.657 10:33:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:16:56.657 10:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:56.657 10:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:56.657 10:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:56.657 10:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:56.657 10:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:16:56.657 10:33:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:16:56.657 10:33:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:16:56.657 10:33:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:16:56.657 10:33:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:56.657 10:33:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:16:56.657 10:33:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:56.657 10:33:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:16:56.657 10:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:56.657 10:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:56.657 10:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:56.657 10:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:56.657 10:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:56.657 10:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:56.657 10:33:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:16:56.657 10:33:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:56.657 10:33:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:56.657 10:33:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:56.657 10:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:16:56.657 10:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:56.657 10:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:56.657 [ 0]:0x2 00:16:56.657 10:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:56.657 10:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:56.919 10:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=99106b7bbad843269ba9cc1ff8915c85 00:16:56.919 10:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
99106b7bbad843269ba9cc1ff8915c85 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:56.919 10:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:56.919 10:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:16:56.919 10:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:56.919 10:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:56.919 [ 0]:0x1 00:16:56.919 10:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:56.919 10:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:56.919 10:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=16453d8b6419462291ccb3048f3c783c 00:16:56.919 10:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 16453d8b6419462291ccb3048f3c783c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:56.919 10:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:16:56.919 10:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:56.919 10:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:57.179 [ 1]:0x2 00:16:57.179 10:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:57.179 10:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:57.179 10:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=99106b7bbad843269ba9cc1ff8915c85 00:16:57.179 10:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 99106b7bbad843269ba9cc1ff8915c85 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:57.179 10:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:57.179 10:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:16:57.179 10:33:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:16:57.179 10:33:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:16:57.179 10:33:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:16:57.179 10:33:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:57.179 10:33:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:16:57.179 10:33:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:57.179 10:33:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:16:57.179 10:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:57.179 10:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:57.179 10:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:57.179 10:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:57.438 10:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:16:57.438 10:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:57.438 10:33:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:16:57.438 10:33:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:57.438 10:33:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:57.438 10:33:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:57.438 10:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:16:57.438 10:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:57.438 10:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:57.438 [ 0]:0x2 00:16:57.438 10:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:57.438 10:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:57.438 10:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=99106b7bbad843269ba9cc1ff8915c85 00:16:57.438 10:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 99106b7bbad843269ba9cc1ff8915c85 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:57.438 10:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:16:57.438 10:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:57.438 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:57.438 10:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:57.697 10:33:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:16:57.697 10:33:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b28dfd3b-88c0-46e9-b057-6ca99aaa3d18 -a 10.0.0.2 -s 4420 -i 4 00:16:57.697 10:33:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:57.697 10:33:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:57.697 10:33:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:57.697 10:33:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:16:57.697 10:33:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:16:57.697 10:33:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:00.235 10:33:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:00.235 10:33:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:00.235 10:33:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:00.235 10:33:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:17:00.235 10:33:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:00.235 10:33:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
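The connect-and-wait sequence that finishes here is the same pattern used after every nvme connect in this test: poll lsblk until the expected number of block devices carrying the SPDK serial shows up. A reduced sketch of that wait, with the serial string, device count, and retry limit taken from the trace (the helper name is illustrative, not the autotest function itself):

  # Poll until $2 block devices with serial $1 are visible (illustrative helper).
  wait_for_serial() {
      local serial=$1 expected=${2:-1} i=0
      while (( i++ <= 15 )); do
          sleep 2
          if (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") == expected )); then
              return 0
          fi
      done
      return 1
  }
  # Example matching the trace above: wait_for_serial SPDKISFASTANDAWESOME 2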
00:17:00.235 10:33:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:00.235 10:33:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:00.235 10:33:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:00.235 10:33:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:00.235 10:33:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:17:00.235 10:33:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:00.235 10:33:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:00.235 [ 0]:0x1 00:17:00.235 10:33:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:00.235 10:33:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:00.235 10:33:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=16453d8b6419462291ccb3048f3c783c 00:17:00.235 10:33:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 16453d8b6419462291ccb3048f3c783c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:00.235 10:33:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:17:00.235 10:33:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:00.235 10:33:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:00.235 [ 1]:0x2 00:17:00.235 10:33:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:00.235 10:33:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:00.235 10:33:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=99106b7bbad843269ba9cc1ff8915c85 00:17:00.235 10:33:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 99106b7bbad843269ba9cc1ff8915c85 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:00.235 10:33:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:00.235 10:33:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:17:00.235 10:33:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:17:00.235 10:33:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:17:00.235 10:33:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:17:00.235 10:33:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:00.235 10:33:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:17:00.235 10:33:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:00.235 10:33:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:17:00.235 10:33:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:00.235 10:33:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:00.235 10:33:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:00.235 10:33:05 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:17:00.235 10:33:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:00.235 10:33:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:00.235 10:33:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:17:00.235 10:33:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:00.235 10:33:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:00.235 10:33:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:00.235 10:33:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:17:00.235 10:33:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:00.235 10:33:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:00.235 [ 0]:0x2 00:17:00.235 10:33:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:00.235 10:33:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:00.496 10:33:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=99106b7bbad843269ba9cc1ff8915c85 00:17:00.496 10:33:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 99106b7bbad843269ba9cc1ff8915c85 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:00.496 10:33:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:00.496 10:33:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:17:00.496 10:33:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:00.496 10:33:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:00.496 10:33:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:00.496 10:33:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:00.496 10:33:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:00.496 10:33:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:00.496 10:33:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:00.496 10:33:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:00.496 10:33:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:00.496 10:33:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:00.496 [2024-07-22 10:33:06.116727] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:17:00.496 request: 00:17:00.496 { 00:17:00.496 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:00.496 "nsid": 2, 00:17:00.496 "host": "nqn.2016-06.io.spdk:host1", 00:17:00.496 "method": "nvmf_ns_remove_host", 00:17:00.496 "req_id": 1 00:17:00.496 } 00:17:00.496 Got JSON-RPC error response 00:17:00.496 response: 00:17:00.496 { 00:17:00.496 "code": -32602, 00:17:00.496 "message": "Invalid parameters" 00:17:00.496 } 00:17:00.496 10:33:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:17:00.496 10:33:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:00.496 10:33:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:00.496 10:33:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:00.496 10:33:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:17:00.496 10:33:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:17:00.496 10:33:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:17:00.496 10:33:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:17:00.496 10:33:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:00.496 10:33:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:17:00.496 10:33:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:00.496 10:33:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:17:00.496 10:33:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:00.496 10:33:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:00.496 10:33:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:00.496 10:33:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:00.496 10:33:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:00.496 10:33:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:00.496 10:33:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:17:00.496 10:33:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:00.496 10:33:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:00.496 10:33:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:00.497 10:33:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:17:00.497 10:33:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:00.497 10:33:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:00.756 [ 0]:0x2 00:17:00.756 10:33:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:00.756 10:33:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:00.756 10:33:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=99106b7bbad843269ba9cc1ff8915c85 00:17:00.756 10:33:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
99106b7bbad843269ba9cc1ff8915c85 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:00.756 10:33:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:17:00.756 10:33:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:00.756 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:00.756 10:33:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1897548 00:17:00.756 10:33:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:17:00.756 10:33:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:17:00.756 10:33:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1897548 /var/tmp/host.sock 00:17:00.756 10:33:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1897548 ']' 00:17:00.756 10:33:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:17:00.756 10:33:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:00.756 10:33:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:00.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:00.757 10:33:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:00.757 10:33:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:00.757 [2024-07-22 10:33:06.353589] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
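At this point the test starts a second SPDK application that acts purely as the NVMe-oF host side, bound to its own RPC socket (/var/tmp/host.sock) so it can be driven independently of the target. A condensed sketch of that pattern, using the socket path, core mask, and attach parameters that appear in this run (they are specific to this job, not required values), run from an SPDK build tree:

    # Sketch: bring up a host-side SPDK app on a private RPC socket.
    ./build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 &
    hostpid=$!
    while [ ! -S /var/tmp/host.sock ]; do sleep 0.1; done   # crude stand-in for waitforlisten

    # Attach the target's subsystem through that app, one controller per host NQN:
    ./scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 \
        -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0

The resulting bdevs (nvme0n1 and nvme1n2 further down) are then inspected with rpc.py -s /var/tmp/host.sock bdev_get_bdevs to confirm their UUIDs match the NGUIDs assigned when the namespaces were created.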
00:17:00.757 [2024-07-22 10:33:06.353638] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1897548 ] 00:17:00.757 EAL: No free 2048 kB hugepages reported on node 1 00:17:00.757 [2024-07-22 10:33:06.434829] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.016 [2024-07-22 10:33:06.466724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:01.588 10:33:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:01.588 10:33:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:17:01.588 10:33:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:01.849 10:33:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:01.849 10:33:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 84d6ab43-82b7-4b66-b3d5-a3384e57a085 00:17:01.849 10:33:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:17:01.849 10:33:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 84D6AB4382B74B66B3D5A3384E57A085 -i 00:17:02.109 10:33:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid bf90d87f-ebd2-4d3b-b7a5-c450c837297a 00:17:02.109 10:33:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:17:02.109 10:33:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g BF90D87FEBD24D3BB7A5C450C837297A -i 00:17:02.109 10:33:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:02.370 10:33:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:17:02.370 10:33:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:02.370 10:33:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:02.942 nvme0n1 00:17:02.942 10:33:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:02.942 10:33:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:17:03.202 nvme1n2 00:17:03.202 10:33:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:17:03.202 10:33:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:17:03.202 10:33:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:03.202 10:33:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:17:03.202 10:33:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:17:03.202 10:33:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:17:03.202 10:33:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:17:03.202 10:33:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:17:03.202 10:33:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:17:03.462 10:33:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 84d6ab43-82b7-4b66-b3d5-a3384e57a085 == \8\4\d\6\a\b\4\3\-\8\2\b\7\-\4\b\6\6\-\b\3\d\5\-\a\3\3\8\4\e\5\7\a\0\8\5 ]] 00:17:03.462 10:33:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:17:03.463 10:33:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:17:03.463 10:33:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:17:03.724 10:33:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ bf90d87f-ebd2-4d3b-b7a5-c450c837297a == \b\f\9\0\d\8\7\f\-\e\b\d\2\-\4\d\3\b\-\b\7\a\5\-\c\4\5\0\c\8\3\7\2\9\7\a ]] 00:17:03.724 10:33:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1897548 00:17:03.724 10:33:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1897548 ']' 00:17:03.724 10:33:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1897548 00:17:03.724 10:33:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:17:03.724 10:33:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:03.724 10:33:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1897548 00:17:03.724 10:33:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:03.724 10:33:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:03.724 10:33:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1897548' 00:17:03.724 killing process with pid 1897548 00:17:03.724 10:33:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1897548 00:17:03.724 10:33:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1897548 00:17:03.984 10:33:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:03.984 10:33:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:17:03.984 10:33:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:17:03.984 10:33:09 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:03.984 10:33:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:17:03.984 10:33:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:03.984 10:33:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:17:03.984 10:33:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:03.984 10:33:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:03.984 rmmod nvme_tcp 00:17:03.984 rmmod nvme_fabrics 00:17:03.984 rmmod nvme_keyring 00:17:03.984 10:33:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:03.984 10:33:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:17:03.984 10:33:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:17:03.984 10:33:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1894975 ']' 00:17:03.984 10:33:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1894975 00:17:03.984 10:33:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1894975 ']' 00:17:03.984 10:33:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1894975 00:17:03.984 10:33:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:17:03.984 10:33:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:03.984 10:33:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1894975 00:17:04.245 10:33:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:04.245 10:33:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:04.245 10:33:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1894975' 00:17:04.245 killing process with pid 1894975 00:17:04.245 10:33:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1894975 00:17:04.245 10:33:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1894975 00:17:04.245 10:33:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:04.245 10:33:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:04.245 10:33:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:04.245 10:33:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:04.245 10:33:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:04.245 10:33:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:04.245 10:33:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:04.245 10:33:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.786 10:33:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:06.786 00:17:06.786 real 0m24.787s 00:17:06.786 user 0m24.190s 00:17:06.786 sys 0m7.830s 00:17:06.786 10:33:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:06.786 10:33:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:06.786 ************************************ 00:17:06.786 END TEST nvmf_ns_masking 00:17:06.786 ************************************ 00:17:06.786 10:33:11 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:17:06.786 10:33:11 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:17:06.786 10:33:11 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:06.786 10:33:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:06.786 10:33:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:06.786 10:33:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:06.786 ************************************ 00:17:06.786 START TEST nvmf_nvme_cli 00:17:06.786 ************************************ 00:17:06.786 10:33:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:06.786 * Looking for test storage... 00:17:06.786 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:06.786 10:33:12 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:06.786 10:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:17:06.786 10:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:06.786 10:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:06.786 10:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:06.786 10:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:06.786 10:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:06.786 10:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:06.786 10:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:06.786 10:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:06.786 10:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:06.786 10:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:06.786 10:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:06.786 10:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:06.786 10:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:06.786 10:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:06.786 10:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:06.786 10:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:06.786 10:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:06.786 10:33:12 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:06.786 10:33:12 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:06.786 10:33:12 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:06.786 10:33:12 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.786 10:33:12 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.786 10:33:12 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.786 10:33:12 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:17:06.786 10:33:12 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.786 10:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:17:06.786 10:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:06.786 10:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:06.786 10:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:06.786 10:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:06.786 10:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:06.786 10:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:06.786 10:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:06.786 10:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:06.786 10:33:12 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:06.786 10:33:12 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:06.786 10:33:12 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:17:06.786 10:33:12 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:17:06.786 10:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:06.786 10:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:06.786 10:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:06.786 10:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:06.786 10:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:06.786 10:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.786 10:33:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:06.786 10:33:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.786 10:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:06.786 10:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:06.786 10:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:17:06.786 10:33:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:14.932 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:14.932 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:14.932 Found net devices under 0000:31:00.0: cvl_0_0 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:14.932 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:14.933 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:14.933 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:14.933 Found net devices under 0000:31:00.1: cvl_0_1 00:17:14.933 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:14.933 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:14.933 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:17:14.933 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:14.933 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:14.933 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:14.933 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:14.933 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:14.933 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:14.933 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:14.933 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:14.933 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:14.933 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:14.933 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:14.933 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:14.933 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:14.933 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:14.933 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:14.933 10:33:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:14.933 10:33:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:14.933 10:33:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:14.933 10:33:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:14.933 10:33:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:14.933 10:33:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:14.933 10:33:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:14.933 10:33:20 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:14.933 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:14.933 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.521 ms 00:17:14.933 00:17:14.933 --- 10.0.0.2 ping statistics --- 00:17:14.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.933 rtt min/avg/max/mdev = 0.521/0.521/0.521/0.000 ms 00:17:14.933 10:33:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:14.933 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:14.933 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:17:14.933 00:17:14.933 --- 10.0.0.1 ping statistics --- 00:17:14.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.933 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:17:14.933 10:33:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:14.933 10:33:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:17:14.933 10:33:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:14.933 10:33:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:14.933 10:33:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:14.933 10:33:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:14.933 10:33:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:14.933 10:33:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:14.933 10:33:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:14.933 10:33:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:17:14.933 10:33:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:14.933 10:33:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:14.933 10:33:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:14.933 10:33:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1903364 00:17:14.933 10:33:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1903364 00:17:14.933 10:33:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:14.933 10:33:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 1903364 ']' 00:17:14.933 10:33:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:14.933 10:33:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:14.933 10:33:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:14.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:14.933 10:33:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:14.933 10:33:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:14.933 [2024-07-22 10:33:20.399343] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
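The pings above close out the network bring-up: one port of the NIC is moved into a dedicated network namespace for the target (10.0.0.2) while the other port stays in the root namespace as the initiator (10.0.0.1), so the TCP transport is exercised over real hardware on a single box before nvmf_tgt is started inside that namespace. A condensed sketch of the steps the common.sh helpers performed, with the interface and namespace names taken from this run (run as root; names will differ on other machines):

    ip netns add cvl_0_0_ns_spdk                              # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # move the target-facing port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                        # sanity check before starting the target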
00:17:14.933 [2024-07-22 10:33:20.399416] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:14.933 EAL: No free 2048 kB hugepages reported on node 1 00:17:14.933 [2024-07-22 10:33:20.479278] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:14.933 [2024-07-22 10:33:20.521812] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:14.933 [2024-07-22 10:33:20.521852] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:14.933 [2024-07-22 10:33:20.521860] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:14.933 [2024-07-22 10:33:20.521867] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:14.933 [2024-07-22 10:33:20.521872] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:14.933 [2024-07-22 10:33:20.522019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:14.933 [2024-07-22 10:33:20.522135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:14.933 [2024-07-22 10:33:20.522278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.933 [2024-07-22 10:33:20.522279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:15.503 10:33:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:15.503 10:33:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:17:15.503 10:33:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:15.503 10:33:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:15.503 10:33:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:15.763 10:33:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:15.763 10:33:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:15.763 10:33:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.763 10:33:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:15.763 [2024-07-22 10:33:21.229097] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:15.763 10:33:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.763 10:33:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:15.763 10:33:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.763 10:33:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:15.763 Malloc0 00:17:15.763 10:33:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.763 10:33:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:15.763 10:33:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.763 10:33:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:15.763 Malloc1 00:17:15.763 10:33:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.763 10:33:21 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:17:15.763 10:33:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.763 10:33:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:15.763 10:33:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.763 10:33:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:15.763 10:33:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.763 10:33:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:15.763 10:33:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.763 10:33:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:15.763 10:33:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.763 10:33:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:15.763 10:33:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.763 10:33:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:15.763 10:33:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.763 10:33:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:15.763 [2024-07-22 10:33:21.318881] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:15.763 10:33:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.763 10:33:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:15.763 10:33:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.763 10:33:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:15.763 10:33:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.763 10:33:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:17:15.763 00:17:15.763 Discovery Log Number of Records 2, Generation counter 2 00:17:15.763 =====Discovery Log Entry 0====== 00:17:15.763 trtype: tcp 00:17:15.763 adrfam: ipv4 00:17:15.763 subtype: current discovery subsystem 00:17:15.763 treq: not required 00:17:15.763 portid: 0 00:17:15.763 trsvcid: 4420 00:17:15.763 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:15.763 traddr: 10.0.0.2 00:17:15.763 eflags: explicit discovery connections, duplicate discovery information 00:17:15.763 sectype: none 00:17:15.763 =====Discovery Log Entry 1====== 00:17:15.763 trtype: tcp 00:17:15.763 adrfam: ipv4 00:17:15.763 subtype: nvme subsystem 00:17:15.763 treq: not required 00:17:15.763 portid: 0 00:17:15.763 trsvcid: 4420 00:17:15.763 subnqn: nqn.2016-06.io.spdk:cnode1 00:17:15.763 traddr: 10.0.0.2 00:17:15.763 eflags: none 00:17:15.763 sectype: none 00:17:15.763 10:33:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:17:15.763 10:33:21 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:17:15.763 10:33:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:17:15.763 10:33:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:15.764 10:33:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:17:15.764 10:33:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:17:15.764 10:33:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:15.764 10:33:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:17:15.764 10:33:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:15.764 10:33:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:17:15.764 10:33:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:17.677 10:33:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:17.677 10:33:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:17:17.677 10:33:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:17.677 10:33:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:17:17.677 10:33:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:17:17.677 10:33:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:17:19.686 10:33:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:19.686 10:33:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:19.686 10:33:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:19.686 10:33:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:17:19.686 10:33:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:19.686 10:33:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:17:19.686 10:33:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:17:19.686 10:33:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:17:19.686 10:33:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:19.686 10:33:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:17:19.686 10:33:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:17:19.686 10:33:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:19.686 10:33:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:17:19.686 10:33:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:19.686 10:33:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:19.686 10:33:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:17:19.686 10:33:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:19.686 10:33:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:19.686 10:33:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:17:19.686 10:33:25 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:19.686 10:33:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:17:19.686 /dev/nvme0n1 ]] 00:17:19.686 10:33:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:17:19.686 10:33:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:17:19.686 10:33:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:17:19.686 10:33:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:19.686 10:33:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:17:19.686 10:33:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:17:19.686 10:33:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:19.686 10:33:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:17:19.686 10:33:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:19.686 10:33:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:19.686 10:33:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:17:19.686 10:33:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:19.686 10:33:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:19.686 10:33:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:17:19.686 10:33:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:19.686 10:33:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:17:19.686 10:33:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:19.946 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:19.946 10:33:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:19.946 10:33:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:17:19.946 10:33:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:19.946 10:33:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:19.946 10:33:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:19.946 10:33:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:19.946 10:33:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:17:19.946 10:33:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:17:19.946 10:33:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:19.946 10:33:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.946 10:33:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:19.946 10:33:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.946 10:33:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:17:19.946 10:33:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:17:19.946 10:33:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:19.946 10:33:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:17:19.946 10:33:25 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:19.946 10:33:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:17:19.946 10:33:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:19.946 10:33:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:19.946 rmmod nvme_tcp 00:17:19.946 rmmod nvme_fabrics 00:17:19.946 rmmod nvme_keyring 00:17:19.946 10:33:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:19.946 10:33:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:17:19.946 10:33:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:17:19.946 10:33:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1903364 ']' 00:17:19.946 10:33:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1903364 00:17:19.946 10:33:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 1903364 ']' 00:17:19.946 10:33:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 1903364 00:17:19.946 10:33:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:17:19.946 10:33:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:19.946 10:33:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1903364 00:17:20.207 10:33:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:20.207 10:33:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:20.207 10:33:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1903364' 00:17:20.207 killing process with pid 1903364 00:17:20.207 10:33:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 1903364 00:17:20.207 10:33:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 1903364 00:17:20.207 10:33:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:20.207 10:33:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:20.207 10:33:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:20.207 10:33:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:20.207 10:33:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:20.207 10:33:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:20.207 10:33:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:20.207 10:33:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:22.748 10:33:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:22.748 00:17:22.749 real 0m15.907s 00:17:22.749 user 0m23.515s 00:17:22.749 sys 0m6.632s 00:17:22.749 10:33:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:22.749 10:33:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:22.749 ************************************ 00:17:22.749 END TEST nvmf_nvme_cli 00:17:22.749 ************************************ 00:17:22.749 10:33:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:22.749 10:33:27 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:17:22.749 10:33:27 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:22.749 10:33:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:22.749 10:33:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:22.749 10:33:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:22.749 ************************************ 00:17:22.749 START TEST nvmf_vfio_user 00:17:22.749 ************************************ 00:17:22.749 10:33:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:22.749 * Looking for test storage... 00:17:22.749 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:17:22.749 
10:33:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1904865 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1904865' 00:17:22.749 Process pid: 1904865 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1904865 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 1904865 ']' 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:22.749 10:33:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:22.749 [2024-07-22 10:33:28.172562] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:17:22.749 [2024-07-22 10:33:28.172609] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:22.749 EAL: No free 2048 kB hugepages reported on node 1 00:17:22.749 [2024-07-22 10:33:28.238771] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:22.749 [2024-07-22 10:33:28.271707] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:22.749 [2024-07-22 10:33:28.271743] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:22.749 [2024-07-22 10:33:28.271751] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:22.749 [2024-07-22 10:33:28.271757] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:22.749 [2024-07-22 10:33:28.271763] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
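The target application is up at this point; the trace that follows starts its reactors and then creates the VFIOUSER transport and, for each of the two test devices, a malloc bdev, a subsystem, a namespace and a vfio-user listener. Consolidated into a minimal shell sketch (the rpc.py path, sizes, NQNs and socket directories are taken from the trace; the loop and the $rpc shorthand are editorial consolidation of the per-device calls, and this is a sketch rather than the test script itself):

  # Sketch only (not captured output): target-side setup the trace below performs.
  # rpc.py talks to the nvmf_tgt started above on /var/tmp/spdk.sock.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t VFIOUSER
  for i in 1 2; do                                   # NUM_DEVICES=2 in the test script
    mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
    $rpc bdev_malloc_create 64 512 -b Malloc$i       # 64 MB malloc bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
    $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
    $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
         -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
  done

The socket directory passed to -a is what the host-side tools later use as traddr in their transport ID string.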
00:17:22.749 [2024-07-22 10:33:28.271898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:22.749 [2024-07-22 10:33:28.272011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:22.749 [2024-07-22 10:33:28.272171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.749 [2024-07-22 10:33:28.272172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:23.320 10:33:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:23.320 10:33:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:17:23.320 10:33:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:24.264 10:33:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:17:24.525 10:33:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:24.525 10:33:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:24.525 10:33:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:24.525 10:33:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:24.525 10:33:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:24.787 Malloc1 00:17:24.787 10:33:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:24.787 10:33:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:25.047 10:33:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:25.307 10:33:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:25.307 10:33:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:25.307 10:33:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:25.307 Malloc2 00:17:25.307 10:33:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:17:25.567 10:33:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:25.827 10:33:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:25.827 10:33:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:17:25.827 10:33:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:17:25.827 10:33:31 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:25.827 10:33:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:25.827 10:33:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:17:25.827 10:33:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:25.827 [2024-07-22 10:33:31.515018] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:17:25.827 [2024-07-22 10:33:31.515061] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1905556 ] 00:17:25.827 EAL: No free 2048 kB hugepages reported on node 1 00:17:26.089 [2024-07-22 10:33:31.546025] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:17:26.089 [2024-07-22 10:33:31.554730] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:26.089 [2024-07-22 10:33:31.554750] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f616ed0b000 00:17:26.089 [2024-07-22 10:33:31.555727] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:26.089 [2024-07-22 10:33:31.556727] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:26.089 [2024-07-22 10:33:31.557729] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:26.089 [2024-07-22 10:33:31.558731] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:26.089 [2024-07-22 10:33:31.559756] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:26.089 [2024-07-22 10:33:31.560738] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:26.089 [2024-07-22 10:33:31.561746] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:26.089 [2024-07-22 10:33:31.562749] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:26.089 [2024-07-22 10:33:31.563762] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:26.089 [2024-07-22 10:33:31.563772] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f616dad0000 00:17:26.089 [2024-07-22 10:33:31.565102] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:26.089 [2024-07-22 10:33:31.582019] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:17:26.089 [2024-07-22 10:33:31.582049] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:17:26.089 [2024-07-22 10:33:31.586891] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:26.089 [2024-07-22 10:33:31.586934] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:26.089 [2024-07-22 10:33:31.587018] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:17:26.089 [2024-07-22 10:33:31.587036] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:17:26.089 [2024-07-22 10:33:31.587041] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:17:26.089 [2024-07-22 10:33:31.587888] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:17:26.089 [2024-07-22 10:33:31.587899] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:17:26.089 [2024-07-22 10:33:31.587906] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:17:26.089 [2024-07-22 10:33:31.588899] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:26.089 [2024-07-22 10:33:31.588909] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:17:26.089 [2024-07-22 10:33:31.588917] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:17:26.090 [2024-07-22 10:33:31.589908] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:17:26.090 [2024-07-22 10:33:31.589916] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:26.090 [2024-07-22 10:33:31.590910] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:17:26.090 [2024-07-22 10:33:31.590919] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:17:26.090 [2024-07-22 10:33:31.590924] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:17:26.090 [2024-07-22 10:33:31.590934] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:26.090 [2024-07-22 10:33:31.591039] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:17:26.090 [2024-07-22 10:33:31.591044] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:26.090 [2024-07-22 10:33:31.591049] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:17:26.090 [2024-07-22 10:33:31.591918] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:17:26.090 [2024-07-22 10:33:31.592924] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:17:26.090 [2024-07-22 10:33:31.593930] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:26.090 [2024-07-22 10:33:31.594928] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:26.090 [2024-07-22 10:33:31.594978] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:26.090 [2024-07-22 10:33:31.595941] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:17:26.090 [2024-07-22 10:33:31.595948] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:26.090 [2024-07-22 10:33:31.595953] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:17:26.090 [2024-07-22 10:33:31.595974] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:17:26.090 [2024-07-22 10:33:31.595989] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:17:26.090 [2024-07-22 10:33:31.596004] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:26.090 [2024-07-22 10:33:31.596010] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:26.090 [2024-07-22 10:33:31.596013] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:26.090 [2024-07-22 10:33:31.596028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:26.090 [2024-07-22 10:33:31.596061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:26.090 [2024-07-22 10:33:31.596073] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:17:26.090 [2024-07-22 10:33:31.596078] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:17:26.090 [2024-07-22 10:33:31.596083] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:17:26.090 [2024-07-22 10:33:31.596087] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 
!= Connect CNTLID 0x0000 00:17:26.090 [2024-07-22 10:33:31.596092] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:17:26.090 [2024-07-22 10:33:31.596097] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:17:26.090 [2024-07-22 10:33:31.596103] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:17:26.090 [2024-07-22 10:33:31.596112] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:17:26.090 [2024-07-22 10:33:31.596121] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:26.090 [2024-07-22 10:33:31.596131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:26.090 [2024-07-22 10:33:31.596144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.090 [2024-07-22 10:33:31.596153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.090 [2024-07-22 10:33:31.596161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.090 [2024-07-22 10:33:31.596169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.090 [2024-07-22 10:33:31.596174] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:26.090 [2024-07-22 10:33:31.596182] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:26.090 [2024-07-22 10:33:31.596192] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:26.090 [2024-07-22 10:33:31.596201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:26.090 [2024-07-22 10:33:31.596206] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:17:26.090 [2024-07-22 10:33:31.596211] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:26.090 [2024-07-22 10:33:31.596218] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:17:26.090 [2024-07-22 10:33:31.596224] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:26.090 [2024-07-22 10:33:31.596233] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:26.090 [2024-07-22 10:33:31.596245] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:26.090 [2024-07-22 10:33:31.596308] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:17:26.090 [2024-07-22 10:33:31.596315] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:26.090 [2024-07-22 10:33:31.596323] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:26.090 [2024-07-22 10:33:31.596327] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:26.090 [2024-07-22 10:33:31.596330] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:26.090 [2024-07-22 10:33:31.596336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:26.090 [2024-07-22 10:33:31.596350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:26.090 [2024-07-22 10:33:31.596361] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:17:26.090 [2024-07-22 10:33:31.596370] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:17:26.090 [2024-07-22 10:33:31.596377] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:17:26.090 [2024-07-22 10:33:31.596384] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:26.090 [2024-07-22 10:33:31.596388] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:26.090 [2024-07-22 10:33:31.596392] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:26.090 [2024-07-22 10:33:31.596402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:26.090 [2024-07-22 10:33:31.596416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:26.090 [2024-07-22 10:33:31.596428] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:26.090 [2024-07-22 10:33:31.596436] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:26.090 [2024-07-22 10:33:31.596443] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:26.090 [2024-07-22 10:33:31.596447] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:26.090 [2024-07-22 10:33:31.596450] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:26.090 [2024-07-22 10:33:31.596456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:26.090 
[2024-07-22 10:33:31.596466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:26.090 [2024-07-22 10:33:31.596474] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:26.090 [2024-07-22 10:33:31.596480] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:17:26.090 [2024-07-22 10:33:31.596489] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:17:26.090 [2024-07-22 10:33:31.596495] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:17:26.090 [2024-07-22 10:33:31.596500] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:26.090 [2024-07-22 10:33:31.596506] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:17:26.090 [2024-07-22 10:33:31.596511] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:17:26.091 [2024-07-22 10:33:31.596515] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:17:26.091 [2024-07-22 10:33:31.596520] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:17:26.091 [2024-07-22 10:33:31.596537] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:26.091 [2024-07-22 10:33:31.596547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:26.091 [2024-07-22 10:33:31.596560] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:26.091 [2024-07-22 10:33:31.596570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:26.091 [2024-07-22 10:33:31.596581] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:26.091 [2024-07-22 10:33:31.596590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:26.091 [2024-07-22 10:33:31.596601] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:26.091 [2024-07-22 10:33:31.596608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:26.091 [2024-07-22 10:33:31.596620] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:26.091 [2024-07-22 10:33:31.596625] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:26.091 [2024-07-22 10:33:31.596629] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: 
*DEBUG*: prp[0] = 0x2000002f7000 00:17:26.091 [2024-07-22 10:33:31.596632] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:26.091 [2024-07-22 10:33:31.596635] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:26.091 [2024-07-22 10:33:31.596642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:26.091 [2024-07-22 10:33:31.596649] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:26.091 [2024-07-22 10:33:31.596653] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:26.091 [2024-07-22 10:33:31.596657] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:26.091 [2024-07-22 10:33:31.596662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:26.091 [2024-07-22 10:33:31.596670] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:26.091 [2024-07-22 10:33:31.596674] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:26.091 [2024-07-22 10:33:31.596677] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:26.091 [2024-07-22 10:33:31.596683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:26.091 [2024-07-22 10:33:31.596691] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:26.091 [2024-07-22 10:33:31.596695] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:26.091 [2024-07-22 10:33:31.596698] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:26.091 [2024-07-22 10:33:31.596704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:26.091 [2024-07-22 10:33:31.596711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:26.091 [2024-07-22 10:33:31.596722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:26.091 [2024-07-22 10:33:31.596733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:26.091 [2024-07-22 10:33:31.596740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:26.091 ===================================================== 00:17:26.091 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:26.091 ===================================================== 00:17:26.091 Controller Capabilities/Features 00:17:26.091 ================================ 00:17:26.091 Vendor ID: 4e58 00:17:26.091 Subsystem Vendor ID: 4e58 00:17:26.091 Serial Number: SPDK1 00:17:26.091 Model Number: SPDK bdev Controller 00:17:26.091 Firmware Version: 24.09 00:17:26.091 Recommended Arb 
Burst: 6 00:17:26.091 IEEE OUI Identifier: 8d 6b 50 00:17:26.091 Multi-path I/O 00:17:26.091 May have multiple subsystem ports: Yes 00:17:26.091 May have multiple controllers: Yes 00:17:26.091 Associated with SR-IOV VF: No 00:17:26.091 Max Data Transfer Size: 131072 00:17:26.091 Max Number of Namespaces: 32 00:17:26.091 Max Number of I/O Queues: 127 00:17:26.091 NVMe Specification Version (VS): 1.3 00:17:26.091 NVMe Specification Version (Identify): 1.3 00:17:26.091 Maximum Queue Entries: 256 00:17:26.091 Contiguous Queues Required: Yes 00:17:26.091 Arbitration Mechanisms Supported 00:17:26.091 Weighted Round Robin: Not Supported 00:17:26.091 Vendor Specific: Not Supported 00:17:26.091 Reset Timeout: 15000 ms 00:17:26.091 Doorbell Stride: 4 bytes 00:17:26.091 NVM Subsystem Reset: Not Supported 00:17:26.091 Command Sets Supported 00:17:26.091 NVM Command Set: Supported 00:17:26.091 Boot Partition: Not Supported 00:17:26.091 Memory Page Size Minimum: 4096 bytes 00:17:26.091 Memory Page Size Maximum: 4096 bytes 00:17:26.091 Persistent Memory Region: Not Supported 00:17:26.091 Optional Asynchronous Events Supported 00:17:26.091 Namespace Attribute Notices: Supported 00:17:26.091 Firmware Activation Notices: Not Supported 00:17:26.091 ANA Change Notices: Not Supported 00:17:26.091 PLE Aggregate Log Change Notices: Not Supported 00:17:26.091 LBA Status Info Alert Notices: Not Supported 00:17:26.091 EGE Aggregate Log Change Notices: Not Supported 00:17:26.091 Normal NVM Subsystem Shutdown event: Not Supported 00:17:26.091 Zone Descriptor Change Notices: Not Supported 00:17:26.091 Discovery Log Change Notices: Not Supported 00:17:26.091 Controller Attributes 00:17:26.091 128-bit Host Identifier: Supported 00:17:26.091 Non-Operational Permissive Mode: Not Supported 00:17:26.091 NVM Sets: Not Supported 00:17:26.091 Read Recovery Levels: Not Supported 00:17:26.091 Endurance Groups: Not Supported 00:17:26.091 Predictable Latency Mode: Not Supported 00:17:26.091 Traffic Based Keep ALive: Not Supported 00:17:26.091 Namespace Granularity: Not Supported 00:17:26.091 SQ Associations: Not Supported 00:17:26.091 UUID List: Not Supported 00:17:26.091 Multi-Domain Subsystem: Not Supported 00:17:26.091 Fixed Capacity Management: Not Supported 00:17:26.091 Variable Capacity Management: Not Supported 00:17:26.091 Delete Endurance Group: Not Supported 00:17:26.091 Delete NVM Set: Not Supported 00:17:26.091 Extended LBA Formats Supported: Not Supported 00:17:26.091 Flexible Data Placement Supported: Not Supported 00:17:26.091 00:17:26.091 Controller Memory Buffer Support 00:17:26.091 ================================ 00:17:26.091 Supported: No 00:17:26.091 00:17:26.091 Persistent Memory Region Support 00:17:26.091 ================================ 00:17:26.091 Supported: No 00:17:26.091 00:17:26.091 Admin Command Set Attributes 00:17:26.091 ============================ 00:17:26.091 Security Send/Receive: Not Supported 00:17:26.091 Format NVM: Not Supported 00:17:26.091 Firmware Activate/Download: Not Supported 00:17:26.091 Namespace Management: Not Supported 00:17:26.091 Device Self-Test: Not Supported 00:17:26.091 Directives: Not Supported 00:17:26.091 NVMe-MI: Not Supported 00:17:26.091 Virtualization Management: Not Supported 00:17:26.091 Doorbell Buffer Config: Not Supported 00:17:26.091 Get LBA Status Capability: Not Supported 00:17:26.091 Command & Feature Lockdown Capability: Not Supported 00:17:26.091 Abort Command Limit: 4 00:17:26.091 Async Event Request Limit: 4 00:17:26.091 Number of Firmware Slots: N/A 
00:17:26.091 Firmware Slot 1 Read-Only: N/A 00:17:26.091 Firmware Activation Without Reset: N/A 00:17:26.091 Multiple Update Detection Support: N/A 00:17:26.091 Firmware Update Granularity: No Information Provided 00:17:26.091 Per-Namespace SMART Log: No 00:17:26.091 Asymmetric Namespace Access Log Page: Not Supported 00:17:26.091 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:17:26.091 Command Effects Log Page: Supported 00:17:26.091 Get Log Page Extended Data: Supported 00:17:26.091 Telemetry Log Pages: Not Supported 00:17:26.091 Persistent Event Log Pages: Not Supported 00:17:26.091 Supported Log Pages Log Page: May Support 00:17:26.091 Commands Supported & Effects Log Page: Not Supported 00:17:26.091 Feature Identifiers & Effects Log Page:May Support 00:17:26.091 NVMe-MI Commands & Effects Log Page: May Support 00:17:26.091 Data Area 4 for Telemetry Log: Not Supported 00:17:26.091 Error Log Page Entries Supported: 128 00:17:26.091 Keep Alive: Supported 00:17:26.091 Keep Alive Granularity: 10000 ms 00:17:26.091 00:17:26.091 NVM Command Set Attributes 00:17:26.092 ========================== 00:17:26.092 Submission Queue Entry Size 00:17:26.092 Max: 64 00:17:26.092 Min: 64 00:17:26.092 Completion Queue Entry Size 00:17:26.092 Max: 16 00:17:26.092 Min: 16 00:17:26.092 Number of Namespaces: 32 00:17:26.092 Compare Command: Supported 00:17:26.092 Write Uncorrectable Command: Not Supported 00:17:26.092 Dataset Management Command: Supported 00:17:26.092 Write Zeroes Command: Supported 00:17:26.092 Set Features Save Field: Not Supported 00:17:26.092 Reservations: Not Supported 00:17:26.092 Timestamp: Not Supported 00:17:26.092 Copy: Supported 00:17:26.092 Volatile Write Cache: Present 00:17:26.092 Atomic Write Unit (Normal): 1 00:17:26.092 Atomic Write Unit (PFail): 1 00:17:26.092 Atomic Compare & Write Unit: 1 00:17:26.092 Fused Compare & Write: Supported 00:17:26.092 Scatter-Gather List 00:17:26.092 SGL Command Set: Supported (Dword aligned) 00:17:26.092 SGL Keyed: Not Supported 00:17:26.092 SGL Bit Bucket Descriptor: Not Supported 00:17:26.092 SGL Metadata Pointer: Not Supported 00:17:26.092 Oversized SGL: Not Supported 00:17:26.092 SGL Metadata Address: Not Supported 00:17:26.092 SGL Offset: Not Supported 00:17:26.092 Transport SGL Data Block: Not Supported 00:17:26.092 Replay Protected Memory Block: Not Supported 00:17:26.092 00:17:26.092 Firmware Slot Information 00:17:26.092 ========================= 00:17:26.092 Active slot: 1 00:17:26.092 Slot 1 Firmware Revision: 24.09 00:17:26.092 00:17:26.092 00:17:26.092 Commands Supported and Effects 00:17:26.092 ============================== 00:17:26.092 Admin Commands 00:17:26.092 -------------- 00:17:26.092 Get Log Page (02h): Supported 00:17:26.092 Identify (06h): Supported 00:17:26.092 Abort (08h): Supported 00:17:26.092 Set Features (09h): Supported 00:17:26.092 Get Features (0Ah): Supported 00:17:26.092 Asynchronous Event Request (0Ch): Supported 00:17:26.092 Keep Alive (18h): Supported 00:17:26.092 I/O Commands 00:17:26.092 ------------ 00:17:26.092 Flush (00h): Supported LBA-Change 00:17:26.092 Write (01h): Supported LBA-Change 00:17:26.092 Read (02h): Supported 00:17:26.092 Compare (05h): Supported 00:17:26.092 Write Zeroes (08h): Supported LBA-Change 00:17:26.092 Dataset Management (09h): Supported LBA-Change 00:17:26.092 Copy (19h): Supported LBA-Change 00:17:26.092 00:17:26.092 Error Log 00:17:26.092 ========= 00:17:26.092 00:17:26.092 Arbitration 00:17:26.092 =========== 00:17:26.092 Arbitration Burst: 1 00:17:26.092 00:17:26.092 
Power Management 00:17:26.092 ================ 00:17:26.092 Number of Power States: 1 00:17:26.092 Current Power State: Power State #0 00:17:26.092 Power State #0: 00:17:26.092 Max Power: 0.00 W 00:17:26.092 Non-Operational State: Operational 00:17:26.092 Entry Latency: Not Reported 00:17:26.092 Exit Latency: Not Reported 00:17:26.092 Relative Read Throughput: 0 00:17:26.092 Relative Read Latency: 0 00:17:26.092 Relative Write Throughput: 0 00:17:26.092 Relative Write Latency: 0 00:17:26.092 Idle Power: Not Reported 00:17:26.092 Active Power: Not Reported 00:17:26.092 Non-Operational Permissive Mode: Not Supported 00:17:26.092 00:17:26.092 Health Information 00:17:26.092 ================== 00:17:26.092 Critical Warnings: 00:17:26.092 Available Spare Space: OK 00:17:26.092 Temperature: OK 00:17:26.092 Device Reliability: OK 00:17:26.092 Read Only: No 00:17:26.092 Volatile Memory Backup: OK 00:17:26.092 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:26.092 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:26.092 Available Spare: 0% 00:17:26.092 Available Sp[2024-07-22 10:33:31.596843] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:26.092 [2024-07-22 10:33:31.596852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:26.092 [2024-07-22 10:33:31.596879] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:17:26.092 [2024-07-22 10:33:31.596888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.092 [2024-07-22 10:33:31.596895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.092 [2024-07-22 10:33:31.596901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.092 [2024-07-22 10:33:31.596907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.092 [2024-07-22 10:33:31.600403] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:26.092 [2024-07-22 10:33:31.600414] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:17:26.092 [2024-07-22 10:33:31.600961] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:26.092 [2024-07-22 10:33:31.601000] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:17:26.092 [2024-07-22 10:33:31.601005] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:17:26.092 [2024-07-22 10:33:31.601967] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:17:26.092 [2024-07-22 10:33:31.601977] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:17:26.092 [2024-07-22 10:33:31.602034] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file 
/var/run/vfio-user/domain/vfio-user1/1/cntrl 00:17:26.092 [2024-07-22 10:33:31.603992] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:26.092 are Threshold: 0% 00:17:26.092 Life Percentage Used: 0% 00:17:26.092 Data Units Read: 0 00:17:26.092 Data Units Written: 0 00:17:26.092 Host Read Commands: 0 00:17:26.092 Host Write Commands: 0 00:17:26.092 Controller Busy Time: 0 minutes 00:17:26.092 Power Cycles: 0 00:17:26.092 Power On Hours: 0 hours 00:17:26.092 Unsafe Shutdowns: 0 00:17:26.092 Unrecoverable Media Errors: 0 00:17:26.092 Lifetime Error Log Entries: 0 00:17:26.092 Warning Temperature Time: 0 minutes 00:17:26.092 Critical Temperature Time: 0 minutes 00:17:26.092 00:17:26.092 Number of Queues 00:17:26.092 ================ 00:17:26.092 Number of I/O Submission Queues: 127 00:17:26.092 Number of I/O Completion Queues: 127 00:17:26.092 00:17:26.092 Active Namespaces 00:17:26.092 ================= 00:17:26.092 Namespace ID:1 00:17:26.092 Error Recovery Timeout: Unlimited 00:17:26.092 Command Set Identifier: NVM (00h) 00:17:26.092 Deallocate: Supported 00:17:26.092 Deallocated/Unwritten Error: Not Supported 00:17:26.092 Deallocated Read Value: Unknown 00:17:26.092 Deallocate in Write Zeroes: Not Supported 00:17:26.092 Deallocated Guard Field: 0xFFFF 00:17:26.092 Flush: Supported 00:17:26.092 Reservation: Supported 00:17:26.092 Namespace Sharing Capabilities: Multiple Controllers 00:17:26.092 Size (in LBAs): 131072 (0GiB) 00:17:26.092 Capacity (in LBAs): 131072 (0GiB) 00:17:26.092 Utilization (in LBAs): 131072 (0GiB) 00:17:26.092 NGUID: CBC1A57DA90E4FB3A20D4BBE99AAAD0B 00:17:26.092 UUID: cbc1a57d-a90e-4fb3-a20d-4bbe99aaad0b 00:17:26.092 Thin Provisioning: Not Supported 00:17:26.092 Per-NS Atomic Units: Yes 00:17:26.092 Atomic Boundary Size (Normal): 0 00:17:26.092 Atomic Boundary Size (PFail): 0 00:17:26.092 Atomic Boundary Offset: 0 00:17:26.092 Maximum Single Source Range Length: 65535 00:17:26.092 Maximum Copy Length: 65535 00:17:26.092 Maximum Source Range Count: 1 00:17:26.092 NGUID/EUI64 Never Reused: No 00:17:26.092 Namespace Write Protected: No 00:17:26.092 Number of LBA Formats: 1 00:17:26.092 Current LBA Format: LBA Format #00 00:17:26.092 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:26.092 00:17:26.092 10:33:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:26.092 EAL: No free 2048 kB hugepages reported on node 1 00:17:26.353 [2024-07-22 10:33:31.792035] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:31.636 Initializing NVMe Controllers 00:17:31.636 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:31.636 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:31.636 Initialization complete. Launching workers. 
00:17:31.636 ========================================================
00:17:31.636 Latency(us)
00:17:31.636 Device Information : IOPS MiB/s Average min max
00:17:31.636 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 40015.20 156.31 3198.67 832.77 6856.58
00:17:31.636 ========================================================
00:17:31.636 Total : 40015.20 156.31 3198.67 832.77 6856.58
00:17:31.636
00:17:31.636 [2024-07-22 10:33:36.812314] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:17:31.636 10:33:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2
00:17:31.636 EAL: No free 2048 kB hugepages reported on node 1
00:17:31.636 [2024-07-22 10:33:36.988142] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:17:36.923 Initializing NVMe Controllers
00:17:36.923 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:17:36.923 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1
00:17:36.923 Initialization complete. Launching workers.
00:17:36.923 ========================================================
00:17:36.923 Latency(us)
00:17:36.923 Device Information : IOPS MiB/s Average min max
00:17:36.923 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16043.99 62.67 7977.55 5986.30 8979.88
00:17:36.923 ========================================================
00:17:36.923 Total : 16043.99 62.67 7977.55 5986.30 8979.88
00:17:36.923
00:17:36.923 [2024-07-22 10:33:42.020473] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:17:36.923 10:33:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
00:17:36.923 EAL: No free 2048 kB hugepages reported on node 1
00:17:36.923 [2024-07-22 10:33:42.222384] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:17:42.216 [2024-07-22 10:33:47.298593] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:17:42.216 Initializing NVMe Controllers
00:17:42.216 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:17:42.216 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:17:42.216 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1
00:17:42.216 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2
00:17:42.216 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3
00:17:42.216 Initialization complete. Launching workers.
00:17:42.216 Starting thread on core 2 00:17:42.216 Starting thread on core 3 00:17:42.216 Starting thread on core 1 00:17:42.216 10:33:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:17:42.216 EAL: No free 2048 kB hugepages reported on node 1 00:17:42.216 [2024-07-22 10:33:47.559372] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:45.511 [2024-07-22 10:33:50.628162] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:45.511 Initializing NVMe Controllers 00:17:45.511 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:45.511 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:45.511 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:17:45.511 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:17:45.511 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:17:45.511 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:17:45.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:45.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:45.511 Initialization complete. Launching workers. 00:17:45.511 Starting thread on core 1 with urgent priority queue 00:17:45.511 Starting thread on core 2 with urgent priority queue 00:17:45.511 Starting thread on core 3 with urgent priority queue 00:17:45.511 Starting thread on core 0 with urgent priority queue 00:17:45.511 SPDK bdev Controller (SPDK1 ) core 0: 8023.67 IO/s 12.46 secs/100000 ios 00:17:45.511 SPDK bdev Controller (SPDK1 ) core 1: 15045.33 IO/s 6.65 secs/100000 ios 00:17:45.511 SPDK bdev Controller (SPDK1 ) core 2: 7968.00 IO/s 12.55 secs/100000 ios 00:17:45.511 SPDK bdev Controller (SPDK1 ) core 3: 16147.67 IO/s 6.19 secs/100000 ios 00:17:45.511 ======================================================== 00:17:45.511 00:17:45.511 10:33:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:45.511 EAL: No free 2048 kB hugepages reported on node 1 00:17:45.511 [2024-07-22 10:33:50.897833] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:45.511 Initializing NVMe Controllers 00:17:45.511 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:45.511 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:45.511 Namespace ID: 1 size: 0GB 00:17:45.511 Initialization complete. 00:17:45.511 INFO: using host memory buffer for IO 00:17:45.511 Hello world! 
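Every tool exercised in this sequence (spdk_nvme_perf, reconnect, arbitration, hello_world, overhead) reaches the target through the same VFIOUSER transport ID passed via -r, which combines the vfio-user socket directory (traddr) with the subsystem NQN (subnqn). As a minimal sketch, assuming the same target is still listening, the sequential-read perf step logged above could be reproduced by hand with the arguments from this run; the workspace path, socket directory and NQN below are the values taken from this log and should be treated as placeholders for any other setup.

# Sketch only, not part of the test script: re-run the perf step from this
# log against an already-running vfio-user target. traddr is the directory
# holding the vfio-user socket; subnqn is the subsystem exported there.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
TRADDR=/var/run/vfio-user/domain/vfio-user1/1
SUBNQN=nqn.2019-07.io.spdk:cnode1

# 4096-byte reads, queue depth 128, 5 seconds, core mask 0x2; -s 256 and -g
# are the memory options carried over unchanged from the logged command.
"$SPDK_DIR/build/bin/spdk_nvme_perf" \
    -r "trtype:VFIOUSER traddr:$TRADDR subnqn:$SUBNQN" \
    -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2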
00:17:45.511 [2024-07-22 10:33:50.935061] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:45.511 10:33:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:45.511 EAL: No free 2048 kB hugepages reported on node 1 00:17:45.511 [2024-07-22 10:33:51.206836] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:46.893 Initializing NVMe Controllers 00:17:46.893 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:46.893 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:46.893 Initialization complete. Launching workers. 00:17:46.893 submit (in ns) avg, min, max = 7338.8, 3901.7, 4000954.2 00:17:46.893 complete (in ns) avg, min, max = 19523.3, 2399.2, 3999911.7 00:17:46.893 00:17:46.893 Submit histogram 00:17:46.893 ================ 00:17:46.893 Range in us Cumulative Count 00:17:46.893 3.893 - 3.920: 0.6469% ( 126) 00:17:46.893 3.920 - 3.947: 5.7963% ( 1003) 00:17:46.893 3.947 - 3.973: 15.3661% ( 1864) 00:17:46.893 3.973 - 4.000: 27.2564% ( 2316) 00:17:46.893 4.000 - 4.027: 39.5010% ( 2385) 00:17:46.893 4.027 - 4.053: 53.2190% ( 2672) 00:17:46.893 4.053 - 4.080: 71.1367% ( 3490) 00:17:46.893 4.080 - 4.107: 84.9728% ( 2695) 00:17:46.893 4.107 - 4.133: 93.3001% ( 1622) 00:17:46.893 4.133 - 4.160: 97.3714% ( 793) 00:17:46.893 4.160 - 4.187: 98.8757% ( 293) 00:17:46.893 4.187 - 4.213: 99.3326% ( 89) 00:17:46.893 4.213 - 4.240: 99.4507% ( 23) 00:17:46.893 4.240 - 4.267: 99.4712% ( 4) 00:17:46.893 4.267 - 4.293: 99.4815% ( 2) 00:17:46.893 4.373 - 4.400: 99.4917% ( 2) 00:17:46.893 4.560 - 4.587: 99.4969% ( 1) 00:17:46.893 4.747 - 4.773: 99.5020% ( 1) 00:17:46.893 4.853 - 4.880: 99.5071% ( 1) 00:17:46.893 4.960 - 4.987: 99.5123% ( 1) 00:17:46.893 4.987 - 5.013: 99.5174% ( 1) 00:17:46.893 5.093 - 5.120: 99.5225% ( 1) 00:17:46.893 5.200 - 5.227: 99.5328% ( 2) 00:17:46.893 5.280 - 5.307: 99.5379% ( 1) 00:17:46.893 5.387 - 5.413: 99.5431% ( 1) 00:17:46.893 5.680 - 5.707: 99.5482% ( 1) 00:17:46.893 5.920 - 5.947: 99.5533% ( 1) 00:17:46.893 6.107 - 6.133: 99.5585% ( 1) 00:17:46.893 6.133 - 6.160: 99.5636% ( 1) 00:17:46.893 6.187 - 6.213: 99.5739% ( 2) 00:17:46.893 6.693 - 6.720: 99.5790% ( 1) 00:17:46.893 6.933 - 6.987: 99.5841% ( 1) 00:17:46.893 6.987 - 7.040: 99.5893% ( 1) 00:17:46.893 7.040 - 7.093: 99.5944% ( 1) 00:17:46.893 7.093 - 7.147: 99.5995% ( 1) 00:17:46.893 7.147 - 7.200: 99.6098% ( 2) 00:17:46.893 7.200 - 7.253: 99.6150% ( 1) 00:17:46.893 7.253 - 7.307: 99.6252% ( 2) 00:17:46.893 7.307 - 7.360: 99.6304% ( 1) 00:17:46.893 7.360 - 7.413: 99.6458% ( 3) 00:17:46.893 7.413 - 7.467: 99.6560% ( 2) 00:17:46.893 7.467 - 7.520: 99.6663% ( 2) 00:17:46.893 7.520 - 7.573: 99.6920% ( 5) 00:17:46.893 7.573 - 7.627: 99.7022% ( 2) 00:17:46.893 7.627 - 7.680: 99.7176% ( 3) 00:17:46.893 7.787 - 7.840: 99.7536% ( 7) 00:17:46.893 7.893 - 7.947: 99.7690% ( 3) 00:17:46.893 7.947 - 8.000: 99.7895% ( 4) 00:17:46.893 8.000 - 8.053: 99.8049% ( 3) 00:17:46.893 8.053 - 8.107: 99.8152% ( 2) 00:17:46.893 8.107 - 8.160: 99.8254% ( 2) 00:17:46.893 8.213 - 8.267: 99.8408% ( 3) 00:17:46.893 8.320 - 8.373: 99.8511% ( 2) 00:17:46.893 8.373 - 8.427: 99.8562% ( 1) 00:17:46.894 8.427 - 8.480: 99.8665% ( 2) 00:17:46.894 8.533 - 8.587: 99.8717% ( 1) 00:17:46.894 8.640 - 8.693: 99.8768% ( 1) 
00:17:46.894 8.747 - 8.800: 99.8819% ( 1) 00:17:46.894 8.960 - 9.013: 99.8871% ( 1) 00:17:46.894 9.067 - 9.120: 99.8922% ( 1) 00:17:46.894 9.120 - 9.173: 99.8973% ( 1) 00:17:46.894 9.280 - 9.333: 99.9025% ( 1) 00:17:46.894 9.547 - 9.600: 99.9076% ( 1) 00:17:46.894 10.613 - 10.667: 99.9127% ( 1) 00:17:46.894 10.987 - 11.040: 99.9179% ( 1) 00:17:46.894 3986.773 - 4014.080: 100.0000% ( 16) 00:17:46.894 00:17:46.894 Complete histogram 00:17:46.894 ================== 00:17:46.894 Range in us Cumulative Count 00:17:46.894 2.387 - 2.400: 0.0154% ( 3) 00:17:46.894 2.400 - 2.413: 0.9087% ( 174) 00:17:46.894 2.413 - 2.427: 1.1346% ( 44) 00:17:46.894 2.427 - 2.440: 1.2527% ( 23) 00:17:46.894 2.440 - 2.453: 3.0034% ( 341) 00:17:46.894 2.453 - 2.467: 52.9983% ( 9738) 00:17:46.894 2.467 - 2.480: 63.1482% ( 1977) 00:17:46.894 2.480 - 2.493: 75.7521% ( 2455) 00:17:46.894 2.493 - 2.507: 80.4703% ( 919) 00:17:46.894 2.507 - 2.520: 81.8719% ( 273) 00:17:46.894 2.520 - 2.533: 86.1947% ( 842) 00:17:46.894 2.533 - [2024-07-22 10:33:52.229247] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:46.894 2.547: 92.7303% ( 1273) 00:17:46.894 2.547 - 2.560: 96.1598% ( 668) 00:17:46.894 2.560 - 2.573: 98.0799% ( 374) 00:17:46.894 2.573 - 2.587: 98.9629% ( 172) 00:17:46.894 2.587 - 2.600: 99.1991% ( 46) 00:17:46.894 2.600 - 2.613: 99.2864% ( 17) 00:17:46.894 2.613 - 2.627: 99.3018% ( 3) 00:17:46.894 5.200 - 5.227: 99.3069% ( 1) 00:17:46.894 5.333 - 5.360: 99.3120% ( 1) 00:17:46.894 5.520 - 5.547: 99.3274% ( 3) 00:17:46.894 5.653 - 5.680: 99.3326% ( 1) 00:17:46.894 5.707 - 5.733: 99.3428% ( 2) 00:17:46.894 5.733 - 5.760: 99.3480% ( 1) 00:17:46.894 5.787 - 5.813: 99.3583% ( 2) 00:17:46.894 5.813 - 5.840: 99.3685% ( 2) 00:17:46.894 5.947 - 5.973: 99.3737% ( 1) 00:17:46.894 5.973 - 6.000: 99.3788% ( 1) 00:17:46.894 6.000 - 6.027: 99.3839% ( 1) 00:17:46.894 6.053 - 6.080: 99.3891% ( 1) 00:17:46.894 6.080 - 6.107: 99.3942% ( 1) 00:17:46.894 6.160 - 6.187: 99.3993% ( 1) 00:17:46.894 6.213 - 6.240: 99.4096% ( 2) 00:17:46.894 6.267 - 6.293: 99.4147% ( 1) 00:17:46.894 6.427 - 6.453: 99.4199% ( 1) 00:17:46.894 6.453 - 6.480: 99.4250% ( 1) 00:17:46.894 6.560 - 6.587: 99.4353% ( 2) 00:17:46.894 6.613 - 6.640: 99.4558% ( 4) 00:17:46.894 6.827 - 6.880: 99.4609% ( 1) 00:17:46.894 6.933 - 6.987: 99.4763% ( 3) 00:17:46.894 7.040 - 7.093: 99.4917% ( 3) 00:17:46.894 7.200 - 7.253: 99.4969% ( 1) 00:17:46.894 7.360 - 7.413: 99.5123% ( 3) 00:17:46.894 7.413 - 7.467: 99.5174% ( 1) 00:17:46.894 7.467 - 7.520: 99.5225% ( 1) 00:17:46.894 7.787 - 7.840: 99.5277% ( 1) 00:17:46.894 8.320 - 8.373: 99.5328% ( 1) 00:17:46.894 8.533 - 8.587: 99.5379% ( 1) 00:17:46.894 12.000 - 12.053: 99.5431% ( 1) 00:17:46.894 13.653 - 13.760: 99.5482% ( 1) 00:17:46.894 14.400 - 14.507: 99.5533% ( 1) 00:17:46.894 14.613 - 14.720: 99.5585% ( 1) 00:17:46.894 17.600 - 17.707: 99.5636% ( 1) 00:17:46.894 29.013 - 29.227: 99.5687% ( 1) 00:17:46.894 72.960 - 73.387: 99.5739% ( 1) 00:17:46.894 3986.773 - 4014.080: 100.0000% ( 83) 00:17:46.894 00:17:46.894 10:33:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:17:46.894 10:33:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:46.894 10:33:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:17:46.894 10:33:52 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:17:46.894 10:33:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:46.894 [ 00:17:46.894 { 00:17:46.894 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:46.894 "subtype": "Discovery", 00:17:46.894 "listen_addresses": [], 00:17:46.894 "allow_any_host": true, 00:17:46.894 "hosts": [] 00:17:46.894 }, 00:17:46.894 { 00:17:46.894 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:46.894 "subtype": "NVMe", 00:17:46.894 "listen_addresses": [ 00:17:46.894 { 00:17:46.894 "trtype": "VFIOUSER", 00:17:46.894 "adrfam": "IPv4", 00:17:46.894 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:46.894 "trsvcid": "0" 00:17:46.894 } 00:17:46.894 ], 00:17:46.894 "allow_any_host": true, 00:17:46.894 "hosts": [], 00:17:46.894 "serial_number": "SPDK1", 00:17:46.894 "model_number": "SPDK bdev Controller", 00:17:46.894 "max_namespaces": 32, 00:17:46.894 "min_cntlid": 1, 00:17:46.894 "max_cntlid": 65519, 00:17:46.894 "namespaces": [ 00:17:46.894 { 00:17:46.894 "nsid": 1, 00:17:46.894 "bdev_name": "Malloc1", 00:17:46.894 "name": "Malloc1", 00:17:46.894 "nguid": "CBC1A57DA90E4FB3A20D4BBE99AAAD0B", 00:17:46.894 "uuid": "cbc1a57d-a90e-4fb3-a20d-4bbe99aaad0b" 00:17:46.894 } 00:17:46.894 ] 00:17:46.894 }, 00:17:46.894 { 00:17:46.894 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:46.894 "subtype": "NVMe", 00:17:46.894 "listen_addresses": [ 00:17:46.894 { 00:17:46.894 "trtype": "VFIOUSER", 00:17:46.894 "adrfam": "IPv4", 00:17:46.894 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:46.894 "trsvcid": "0" 00:17:46.894 } 00:17:46.894 ], 00:17:46.894 "allow_any_host": true, 00:17:46.894 "hosts": [], 00:17:46.894 "serial_number": "SPDK2", 00:17:46.894 "model_number": "SPDK bdev Controller", 00:17:46.894 "max_namespaces": 32, 00:17:46.894 "min_cntlid": 1, 00:17:46.894 "max_cntlid": 65519, 00:17:46.894 "namespaces": [ 00:17:46.894 { 00:17:46.894 "nsid": 1, 00:17:46.894 "bdev_name": "Malloc2", 00:17:46.894 "name": "Malloc2", 00:17:46.894 "nguid": "AAEB6F5200FF4C3B8375B983DAB715AE", 00:17:46.894 "uuid": "aaeb6f52-00ff-4c3b-8375-b983dab715ae" 00:17:46.894 } 00:17:46.894 ] 00:17:46.894 } 00:17:46.894 ] 00:17:46.894 10:33:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:46.894 10:33:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:17:46.894 10:33:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1909593 00:17:46.894 10:33:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:46.894 10:33:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:17:46.894 10:33:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:46.894 10:33:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:46.894 10:33:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:17:46.894 10:33:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:46.894 10:33:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:17:46.894 EAL: No free 2048 kB hugepages reported on node 1 00:17:47.154 Malloc3 00:17:47.154 [2024-07-22 10:33:52.619906] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:47.154 10:33:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:17:47.154 [2024-07-22 10:33:52.789027] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:47.154 10:33:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:47.154 Asynchronous Event Request test 00:17:47.154 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:47.154 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:47.154 Registering asynchronous event callbacks... 00:17:47.154 Starting namespace attribute notice tests for all controllers... 00:17:47.154 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:47.154 aer_cb - Changed Namespace 00:17:47.154 Cleaning up... 00:17:47.415 [ 00:17:47.415 { 00:17:47.415 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:47.415 "subtype": "Discovery", 00:17:47.415 "listen_addresses": [], 00:17:47.415 "allow_any_host": true, 00:17:47.415 "hosts": [] 00:17:47.415 }, 00:17:47.415 { 00:17:47.415 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:47.415 "subtype": "NVMe", 00:17:47.415 "listen_addresses": [ 00:17:47.415 { 00:17:47.415 "trtype": "VFIOUSER", 00:17:47.415 "adrfam": "IPv4", 00:17:47.415 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:47.415 "trsvcid": "0" 00:17:47.415 } 00:17:47.415 ], 00:17:47.415 "allow_any_host": true, 00:17:47.415 "hosts": [], 00:17:47.415 "serial_number": "SPDK1", 00:17:47.415 "model_number": "SPDK bdev Controller", 00:17:47.415 "max_namespaces": 32, 00:17:47.415 "min_cntlid": 1, 00:17:47.415 "max_cntlid": 65519, 00:17:47.415 "namespaces": [ 00:17:47.415 { 00:17:47.415 "nsid": 1, 00:17:47.415 "bdev_name": "Malloc1", 00:17:47.415 "name": "Malloc1", 00:17:47.415 "nguid": "CBC1A57DA90E4FB3A20D4BBE99AAAD0B", 00:17:47.415 "uuid": "cbc1a57d-a90e-4fb3-a20d-4bbe99aaad0b" 00:17:47.415 }, 00:17:47.415 { 00:17:47.415 "nsid": 2, 00:17:47.415 "bdev_name": "Malloc3", 00:17:47.415 "name": "Malloc3", 00:17:47.415 "nguid": "3684D206814A49D8B04088362A2C43AA", 00:17:47.415 "uuid": "3684d206-814a-49d8-b040-88362a2c43aa" 00:17:47.415 } 00:17:47.415 ] 00:17:47.415 }, 00:17:47.415 { 00:17:47.415 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:47.415 "subtype": "NVMe", 00:17:47.415 "listen_addresses": [ 00:17:47.415 { 00:17:47.415 "trtype": "VFIOUSER", 00:17:47.415 "adrfam": "IPv4", 00:17:47.415 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:47.415 "trsvcid": "0" 00:17:47.415 } 00:17:47.415 ], 00:17:47.415 "allow_any_host": true, 00:17:47.415 "hosts": [], 00:17:47.415 "serial_number": "SPDK2", 00:17:47.415 "model_number": "SPDK bdev Controller", 00:17:47.415 
"max_namespaces": 32, 00:17:47.415 "min_cntlid": 1, 00:17:47.415 "max_cntlid": 65519, 00:17:47.415 "namespaces": [ 00:17:47.415 { 00:17:47.415 "nsid": 1, 00:17:47.415 "bdev_name": "Malloc2", 00:17:47.415 "name": "Malloc2", 00:17:47.415 "nguid": "AAEB6F5200FF4C3B8375B983DAB715AE", 00:17:47.415 "uuid": "aaeb6f52-00ff-4c3b-8375-b983dab715ae" 00:17:47.415 } 00:17:47.415 ] 00:17:47.415 } 00:17:47.415 ] 00:17:47.415 10:33:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1909593 00:17:47.415 10:33:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:47.415 10:33:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:47.415 10:33:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:17:47.415 10:33:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:47.415 [2024-07-22 10:33:53.014968] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:17:47.415 [2024-07-22 10:33:53.015023] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1909835 ] 00:17:47.415 EAL: No free 2048 kB hugepages reported on node 1 00:17:47.415 [2024-07-22 10:33:53.047955] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:17:47.415 [2024-07-22 10:33:53.056643] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:47.415 [2024-07-22 10:33:53.056664] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fe73fcff000 00:17:47.415 [2024-07-22 10:33:53.057641] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:47.415 [2024-07-22 10:33:53.058647] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:47.415 [2024-07-22 10:33:53.059657] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:47.415 [2024-07-22 10:33:53.060661] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:47.415 [2024-07-22 10:33:53.061664] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:47.415 [2024-07-22 10:33:53.062668] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:47.415 [2024-07-22 10:33:53.063679] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:47.415 [2024-07-22 10:33:53.064680] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:47.415 [2024-07-22 10:33:53.065686] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:47.415 [2024-07-22 10:33:53.065697] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fe73eac4000 00:17:47.415 [2024-07-22 10:33:53.067028] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:47.415 [2024-07-22 10:33:53.083237] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:17:47.415 [2024-07-22 10:33:53.083259] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:17:47.415 [2024-07-22 10:33:53.088347] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:47.415 [2024-07-22 10:33:53.088390] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:47.415 [2024-07-22 10:33:53.088477] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:17:47.415 [2024-07-22 10:33:53.088492] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:17:47.415 [2024-07-22 10:33:53.088498] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:17:47.415 [2024-07-22 10:33:53.089348] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:17:47.415 [2024-07-22 10:33:53.089359] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:17:47.415 [2024-07-22 10:33:53.089366] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:17:47.415 [2024-07-22 10:33:53.090356] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:47.415 [2024-07-22 10:33:53.090368] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:17:47.415 [2024-07-22 10:33:53.090375] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:17:47.415 [2024-07-22 10:33:53.091360] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:17:47.415 [2024-07-22 10:33:53.091370] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:47.415 [2024-07-22 10:33:53.092370] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:17:47.415 [2024-07-22 10:33:53.092378] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:17:47.415 [2024-07-22 10:33:53.092383] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:17:47.415 [2024-07-22 10:33:53.092390] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:47.415 [2024-07-22 10:33:53.092498] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:17:47.415 [2024-07-22 10:33:53.092504] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:47.415 [2024-07-22 10:33:53.092508] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:17:47.415 [2024-07-22 10:33:53.093381] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:17:47.415 [2024-07-22 10:33:53.094384] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:17:47.415 [2024-07-22 10:33:53.095390] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:17:47.415 [2024-07-22 10:33:53.096389] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:47.415 [2024-07-22 10:33:53.096435] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:47.415 [2024-07-22 10:33:53.097403] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:17:47.415 [2024-07-22 10:33:53.097411] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:47.415 [2024-07-22 10:33:53.097416] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:17:47.415 [2024-07-22 10:33:53.097437] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:17:47.415 [2024-07-22 10:33:53.097447] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:17:47.415 [2024-07-22 10:33:53.097459] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:47.415 [2024-07-22 10:33:53.097464] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:47.415 [2024-07-22 10:33:53.097468] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:47.416 [2024-07-22 10:33:53.097480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:47.416 [2024-07-22 10:33:53.106404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:47.416 [2024-07-22 10:33:53.106420] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] 
transport max_xfer_size 131072 00:17:47.416 [2024-07-22 10:33:53.106425] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:17:47.416 [2024-07-22 10:33:53.106429] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:17:47.416 [2024-07-22 10:33:53.106434] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:47.416 [2024-07-22 10:33:53.106438] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:17:47.416 [2024-07-22 10:33:53.106443] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:17:47.416 [2024-07-22 10:33:53.106447] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:17:47.416 [2024-07-22 10:33:53.106455] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:17:47.416 [2024-07-22 10:33:53.106465] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:47.677 [2024-07-22 10:33:53.114404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:47.677 [2024-07-22 10:33:53.114420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.677 [2024-07-22 10:33:53.114429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.677 [2024-07-22 10:33:53.114437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.677 [2024-07-22 10:33:53.114445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.677 [2024-07-22 10:33:53.114450] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:17:47.677 [2024-07-22 10:33:53.114459] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:47.677 [2024-07-22 10:33:53.114468] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:47.677 [2024-07-22 10:33:53.122400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:47.677 [2024-07-22 10:33:53.122409] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:17:47.677 [2024-07-22 10:33:53.122414] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:47.677 [2024-07-22 10:33:53.122423] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set 
number of queues (timeout 30000 ms) 00:17:47.677 [2024-07-22 10:33:53.122429] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:17:47.677 [2024-07-22 10:33:53.122437] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:47.677 [2024-07-22 10:33:53.130401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:47.677 [2024-07-22 10:33:53.130466] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:17:47.677 [2024-07-22 10:33:53.130474] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:17:47.677 [2024-07-22 10:33:53.130481] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:47.677 [2024-07-22 10:33:53.130486] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:47.677 [2024-07-22 10:33:53.130489] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:47.677 [2024-07-22 10:33:53.130496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:47.677 [2024-07-22 10:33:53.138402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:47.677 [2024-07-22 10:33:53.138413] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:17:47.677 [2024-07-22 10:33:53.138425] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:17:47.677 [2024-07-22 10:33:53.138433] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:17:47.677 [2024-07-22 10:33:53.138440] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:47.677 [2024-07-22 10:33:53.138444] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:47.677 [2024-07-22 10:33:53.138448] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:47.677 [2024-07-22 10:33:53.138454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:47.677 [2024-07-22 10:33:53.146400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:47.677 [2024-07-22 10:33:53.146414] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:47.677 [2024-07-22 10:33:53.146422] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:47.677 [2024-07-22 10:33:53.146429] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 
virt_addr:0x2000002fb000 len:4096 00:17:47.677 [2024-07-22 10:33:53.146434] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:47.677 [2024-07-22 10:33:53.146437] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:47.677 [2024-07-22 10:33:53.146443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:47.677 [2024-07-22 10:33:53.154401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:47.677 [2024-07-22 10:33:53.154413] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:47.677 [2024-07-22 10:33:53.154420] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:17:47.677 [2024-07-22 10:33:53.154429] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:17:47.677 [2024-07-22 10:33:53.154434] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:17:47.677 [2024-07-22 10:33:53.154439] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:47.677 [2024-07-22 10:33:53.154444] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:17:47.677 [2024-07-22 10:33:53.154449] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:17:47.677 [2024-07-22 10:33:53.154453] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:17:47.677 [2024-07-22 10:33:53.154458] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:17:47.677 [2024-07-22 10:33:53.154475] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:47.677 [2024-07-22 10:33:53.162399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:47.677 [2024-07-22 10:33:53.162413] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:47.678 [2024-07-22 10:33:53.170402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:47.678 [2024-07-22 10:33:53.170415] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:47.678 [2024-07-22 10:33:53.178401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:47.678 [2024-07-22 10:33:53.178414] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:47.678 [2024-07-22 
10:33:53.186399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:47.678 [2024-07-22 10:33:53.186416] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:47.678 [2024-07-22 10:33:53.186420] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:47.678 [2024-07-22 10:33:53.186424] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:47.678 [2024-07-22 10:33:53.186428] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:47.678 [2024-07-22 10:33:53.186431] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:47.678 [2024-07-22 10:33:53.186437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:47.678 [2024-07-22 10:33:53.186445] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:47.678 [2024-07-22 10:33:53.186449] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:47.678 [2024-07-22 10:33:53.186453] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:47.678 [2024-07-22 10:33:53.186459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:47.678 [2024-07-22 10:33:53.186468] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:47.678 [2024-07-22 10:33:53.186472] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:47.678 [2024-07-22 10:33:53.186475] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:47.678 [2024-07-22 10:33:53.186481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:47.678 [2024-07-22 10:33:53.186489] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:47.678 [2024-07-22 10:33:53.186493] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:47.678 [2024-07-22 10:33:53.186497] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:47.678 [2024-07-22 10:33:53.186502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:47.678 [2024-07-22 10:33:53.194401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:47.678 [2024-07-22 10:33:53.194426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:47.678 [2024-07-22 10:33:53.194437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:47.678 [2024-07-22 10:33:53.194444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:47.678 
===================================================== 00:17:47.678 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:47.678 ===================================================== 00:17:47.678 Controller Capabilities/Features 00:17:47.678 ================================ 00:17:47.678 Vendor ID: 4e58 00:17:47.678 Subsystem Vendor ID: 4e58 00:17:47.678 Serial Number: SPDK2 00:17:47.678 Model Number: SPDK bdev Controller 00:17:47.678 Firmware Version: 24.09 00:17:47.678 Recommended Arb Burst: 6 00:17:47.678 IEEE OUI Identifier: 8d 6b 50 00:17:47.678 Multi-path I/O 00:17:47.678 May have multiple subsystem ports: Yes 00:17:47.678 May have multiple controllers: Yes 00:17:47.678 Associated with SR-IOV VF: No 00:17:47.678 Max Data Transfer Size: 131072 00:17:47.678 Max Number of Namespaces: 32 00:17:47.678 Max Number of I/O Queues: 127 00:17:47.678 NVMe Specification Version (VS): 1.3 00:17:47.678 NVMe Specification Version (Identify): 1.3 00:17:47.678 Maximum Queue Entries: 256 00:17:47.678 Contiguous Queues Required: Yes 00:17:47.678 Arbitration Mechanisms Supported 00:17:47.678 Weighted Round Robin: Not Supported 00:17:47.678 Vendor Specific: Not Supported 00:17:47.678 Reset Timeout: 15000 ms 00:17:47.678 Doorbell Stride: 4 bytes 00:17:47.678 NVM Subsystem Reset: Not Supported 00:17:47.678 Command Sets Supported 00:17:47.678 NVM Command Set: Supported 00:17:47.678 Boot Partition: Not Supported 00:17:47.678 Memory Page Size Minimum: 4096 bytes 00:17:47.678 Memory Page Size Maximum: 4096 bytes 00:17:47.678 Persistent Memory Region: Not Supported 00:17:47.678 Optional Asynchronous Events Supported 00:17:47.678 Namespace Attribute Notices: Supported 00:17:47.678 Firmware Activation Notices: Not Supported 00:17:47.678 ANA Change Notices: Not Supported 00:17:47.678 PLE Aggregate Log Change Notices: Not Supported 00:17:47.678 LBA Status Info Alert Notices: Not Supported 00:17:47.678 EGE Aggregate Log Change Notices: Not Supported 00:17:47.678 Normal NVM Subsystem Shutdown event: Not Supported 00:17:47.678 Zone Descriptor Change Notices: Not Supported 00:17:47.678 Discovery Log Change Notices: Not Supported 00:17:47.678 Controller Attributes 00:17:47.678 128-bit Host Identifier: Supported 00:17:47.678 Non-Operational Permissive Mode: Not Supported 00:17:47.678 NVM Sets: Not Supported 00:17:47.678 Read Recovery Levels: Not Supported 00:17:47.678 Endurance Groups: Not Supported 00:17:47.678 Predictable Latency Mode: Not Supported 00:17:47.678 Traffic Based Keep ALive: Not Supported 00:17:47.678 Namespace Granularity: Not Supported 00:17:47.678 SQ Associations: Not Supported 00:17:47.678 UUID List: Not Supported 00:17:47.678 Multi-Domain Subsystem: Not Supported 00:17:47.678 Fixed Capacity Management: Not Supported 00:17:47.678 Variable Capacity Management: Not Supported 00:17:47.678 Delete Endurance Group: Not Supported 00:17:47.678 Delete NVM Set: Not Supported 00:17:47.678 Extended LBA Formats Supported: Not Supported 00:17:47.678 Flexible Data Placement Supported: Not Supported 00:17:47.678 00:17:47.678 Controller Memory Buffer Support 00:17:47.678 ================================ 00:17:47.678 Supported: No 00:17:47.678 00:17:47.678 Persistent Memory Region Support 00:17:47.678 ================================ 00:17:47.678 Supported: No 00:17:47.678 00:17:47.678 Admin Command Set Attributes 00:17:47.678 ============================ 00:17:47.678 Security Send/Receive: Not Supported 00:17:47.678 Format NVM: Not Supported 00:17:47.678 Firmware 
Activate/Download: Not Supported 00:17:47.678 Namespace Management: Not Supported 00:17:47.678 Device Self-Test: Not Supported 00:17:47.678 Directives: Not Supported 00:17:47.678 NVMe-MI: Not Supported 00:17:47.678 Virtualization Management: Not Supported 00:17:47.678 Doorbell Buffer Config: Not Supported 00:17:47.678 Get LBA Status Capability: Not Supported 00:17:47.678 Command & Feature Lockdown Capability: Not Supported 00:17:47.678 Abort Command Limit: 4 00:17:47.678 Async Event Request Limit: 4 00:17:47.678 Number of Firmware Slots: N/A 00:17:47.678 Firmware Slot 1 Read-Only: N/A 00:17:47.678 Firmware Activation Without Reset: N/A 00:17:47.678 Multiple Update Detection Support: N/A 00:17:47.678 Firmware Update Granularity: No Information Provided 00:17:47.678 Per-Namespace SMART Log: No 00:17:47.678 Asymmetric Namespace Access Log Page: Not Supported 00:17:47.678 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:17:47.678 Command Effects Log Page: Supported 00:17:47.678 Get Log Page Extended Data: Supported 00:17:47.678 Telemetry Log Pages: Not Supported 00:17:47.678 Persistent Event Log Pages: Not Supported 00:17:47.678 Supported Log Pages Log Page: May Support 00:17:47.678 Commands Supported & Effects Log Page: Not Supported 00:17:47.678 Feature Identifiers & Effects Log Page:May Support 00:17:47.678 NVMe-MI Commands & Effects Log Page: May Support 00:17:47.678 Data Area 4 for Telemetry Log: Not Supported 00:17:47.678 Error Log Page Entries Supported: 128 00:17:47.678 Keep Alive: Supported 00:17:47.678 Keep Alive Granularity: 10000 ms 00:17:47.678 00:17:47.678 NVM Command Set Attributes 00:17:47.678 ========================== 00:17:47.678 Submission Queue Entry Size 00:17:47.678 Max: 64 00:17:47.678 Min: 64 00:17:47.678 Completion Queue Entry Size 00:17:47.678 Max: 16 00:17:47.678 Min: 16 00:17:47.678 Number of Namespaces: 32 00:17:47.678 Compare Command: Supported 00:17:47.678 Write Uncorrectable Command: Not Supported 00:17:47.678 Dataset Management Command: Supported 00:17:47.678 Write Zeroes Command: Supported 00:17:47.678 Set Features Save Field: Not Supported 00:17:47.678 Reservations: Not Supported 00:17:47.678 Timestamp: Not Supported 00:17:47.678 Copy: Supported 00:17:47.678 Volatile Write Cache: Present 00:17:47.678 Atomic Write Unit (Normal): 1 00:17:47.678 Atomic Write Unit (PFail): 1 00:17:47.678 Atomic Compare & Write Unit: 1 00:17:47.678 Fused Compare & Write: Supported 00:17:47.678 Scatter-Gather List 00:17:47.678 SGL Command Set: Supported (Dword aligned) 00:17:47.678 SGL Keyed: Not Supported 00:17:47.678 SGL Bit Bucket Descriptor: Not Supported 00:17:47.678 SGL Metadata Pointer: Not Supported 00:17:47.678 Oversized SGL: Not Supported 00:17:47.678 SGL Metadata Address: Not Supported 00:17:47.678 SGL Offset: Not Supported 00:17:47.678 Transport SGL Data Block: Not Supported 00:17:47.678 Replay Protected Memory Block: Not Supported 00:17:47.678 00:17:47.678 Firmware Slot Information 00:17:47.678 ========================= 00:17:47.678 Active slot: 1 00:17:47.678 Slot 1 Firmware Revision: 24.09 00:17:47.678 00:17:47.678 00:17:47.678 Commands Supported and Effects 00:17:47.678 ============================== 00:17:47.678 Admin Commands 00:17:47.678 -------------- 00:17:47.678 Get Log Page (02h): Supported 00:17:47.678 Identify (06h): Supported 00:17:47.678 Abort (08h): Supported 00:17:47.678 Set Features (09h): Supported 00:17:47.678 Get Features (0Ah): Supported 00:17:47.678 Asynchronous Event Request (0Ch): Supported 00:17:47.678 Keep Alive (18h): Supported 00:17:47.678 I/O 
Commands 00:17:47.678 ------------ 00:17:47.678 Flush (00h): Supported LBA-Change 00:17:47.678 Write (01h): Supported LBA-Change 00:17:47.678 Read (02h): Supported 00:17:47.678 Compare (05h): Supported 00:17:47.678 Write Zeroes (08h): Supported LBA-Change 00:17:47.678 Dataset Management (09h): Supported LBA-Change 00:17:47.678 Copy (19h): Supported LBA-Change 00:17:47.678 00:17:47.678 Error Log 00:17:47.678 ========= 00:17:47.678 00:17:47.679 Arbitration 00:17:47.679 =========== 00:17:47.679 Arbitration Burst: 1 00:17:47.679 00:17:47.679 Power Management 00:17:47.679 ================ 00:17:47.679 Number of Power States: 1 00:17:47.679 Current Power State: Power State #0 00:17:47.679 Power State #0: 00:17:47.679 Max Power: 0.00 W 00:17:47.679 Non-Operational State: Operational 00:17:47.679 Entry Latency: Not Reported 00:17:47.679 Exit Latency: Not Reported 00:17:47.679 Relative Read Throughput: 0 00:17:47.679 Relative Read Latency: 0 00:17:47.679 Relative Write Throughput: 0 00:17:47.679 Relative Write Latency: 0 00:17:47.679 Idle Power: Not Reported 00:17:47.679 Active Power: Not Reported 00:17:47.679 Non-Operational Permissive Mode: Not Supported 00:17:47.679 00:17:47.679 Health Information 00:17:47.679 ================== 00:17:47.679 Critical Warnings: 00:17:47.679 Available Spare Space: OK 00:17:47.679 Temperature: OK 00:17:47.679 Device Reliability: OK 00:17:47.679 Read Only: No 00:17:47.679 Volatile Memory Backup: OK 00:17:47.679 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:47.679 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:47.679 Available Spare: 0% 00:17:47.679 Available Sp[2024-07-22 10:33:53.194546] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:47.679 [2024-07-22 10:33:53.202402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:47.679 [2024-07-22 10:33:53.202433] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:17:47.679 [2024-07-22 10:33:53.202442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.679 [2024-07-22 10:33:53.202449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.679 [2024-07-22 10:33:53.202455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.679 [2024-07-22 10:33:53.202461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.679 [2024-07-22 10:33:53.202500] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:17:47.679 [2024-07-22 10:33:53.202510] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:17:47.679 [2024-07-22 10:33:53.203506] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:47.679 [2024-07-22 10:33:53.203553] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:17:47.679 [2024-07-22 10:33:53.203560] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:17:47.679 [2024-07-22 10:33:53.204512] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:17:47.679 [2024-07-22 10:33:53.204523] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:17:47.679 [2024-07-22 10:33:53.204571] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:17:47.679 [2024-07-22 10:33:53.205949] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:47.679 are Threshold: 0% 00:17:47.679 Life Percentage Used: 0% 00:17:47.679 Data Units Read: 0 00:17:47.679 Data Units Written: 0 00:17:47.679 Host Read Commands: 0 00:17:47.679 Host Write Commands: 0 00:17:47.679 Controller Busy Time: 0 minutes 00:17:47.679 Power Cycles: 0 00:17:47.679 Power On Hours: 0 hours 00:17:47.679 Unsafe Shutdowns: 0 00:17:47.679 Unrecoverable Media Errors: 0 00:17:47.679 Lifetime Error Log Entries: 0 00:17:47.679 Warning Temperature Time: 0 minutes 00:17:47.679 Critical Temperature Time: 0 minutes 00:17:47.679 00:17:47.679 Number of Queues 00:17:47.679 ================ 00:17:47.679 Number of I/O Submission Queues: 127 00:17:47.679 Number of I/O Completion Queues: 127 00:17:47.679 00:17:47.679 Active Namespaces 00:17:47.679 ================= 00:17:47.679 Namespace ID:1 00:17:47.679 Error Recovery Timeout: Unlimited 00:17:47.679 Command Set Identifier: NVM (00h) 00:17:47.679 Deallocate: Supported 00:17:47.679 Deallocated/Unwritten Error: Not Supported 00:17:47.679 Deallocated Read Value: Unknown 00:17:47.679 Deallocate in Write Zeroes: Not Supported 00:17:47.679 Deallocated Guard Field: 0xFFFF 00:17:47.679 Flush: Supported 00:17:47.679 Reservation: Supported 00:17:47.679 Namespace Sharing Capabilities: Multiple Controllers 00:17:47.679 Size (in LBAs): 131072 (0GiB) 00:17:47.679 Capacity (in LBAs): 131072 (0GiB) 00:17:47.679 Utilization (in LBAs): 131072 (0GiB) 00:17:47.679 NGUID: AAEB6F5200FF4C3B8375B983DAB715AE 00:17:47.679 UUID: aaeb6f52-00ff-4c3b-8375-b983dab715ae 00:17:47.679 Thin Provisioning: Not Supported 00:17:47.679 Per-NS Atomic Units: Yes 00:17:47.679 Atomic Boundary Size (Normal): 0 00:17:47.679 Atomic Boundary Size (PFail): 0 00:17:47.679 Atomic Boundary Offset: 0 00:17:47.679 Maximum Single Source Range Length: 65535 00:17:47.679 Maximum Copy Length: 65535 00:17:47.679 Maximum Source Range Count: 1 00:17:47.679 NGUID/EUI64 Never Reused: No 00:17:47.679 Namespace Write Protected: No 00:17:47.679 Number of LBA Formats: 1 00:17:47.679 Current LBA Format: LBA Format #00 00:17:47.679 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:47.679 00:17:47.679 10:33:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:47.679 EAL: No free 2048 kB hugepages reported on node 1 00:17:47.938 [2024-07-22 10:33:53.392427] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:53.218 Initializing NVMe Controllers 00:17:53.218 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:53.218 
Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:53.218 Initialization complete. Launching workers. 00:17:53.218 ======================================================== 00:17:53.218 Latency(us) 00:17:53.218 Device Information : IOPS MiB/s Average min max 00:17:53.218 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39988.80 156.21 3203.28 832.99 6818.55 00:17:53.218 ======================================================== 00:17:53.218 Total : 39988.80 156.21 3203.28 832.99 6818.55 00:17:53.218 00:17:53.218 [2024-07-22 10:33:58.502575] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:53.218 10:33:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:53.218 EAL: No free 2048 kB hugepages reported on node 1 00:17:53.218 [2024-07-22 10:33:58.673124] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:58.498 Initializing NVMe Controllers 00:17:58.498 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:58.498 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:58.498 Initialization complete. Launching workers. 00:17:58.498 ======================================================== 00:17:58.498 Latency(us) 00:17:58.498 Device Information : IOPS MiB/s Average min max 00:17:58.498 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35635.10 139.20 3591.48 1107.66 7650.49 00:17:58.498 ======================================================== 00:17:58.498 Total : 35635.10 139.20 3591.48 1107.66 7650.49 00:17:58.498 00:17:58.498 [2024-07-22 10:34:03.691507] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:58.498 10:34:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:58.498 EAL: No free 2048 kB hugepages reported on node 1 00:17:58.498 [2024-07-22 10:34:03.889778] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:03.781 [2024-07-22 10:34:09.018480] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:03.781 Initializing NVMe Controllers 00:18:03.781 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:03.781 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:03.781 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:18:03.781 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:18:03.781 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:18:03.781 Initialization complete. Launching workers. 
00:18:03.781 Starting thread on core 2 00:18:03.781 Starting thread on core 3 00:18:03.781 Starting thread on core 1 00:18:03.781 10:34:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:18:03.781 EAL: No free 2048 kB hugepages reported on node 1 00:18:03.781 [2024-07-22 10:34:09.279048] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:07.071 [2024-07-22 10:34:12.338376] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:07.071 Initializing NVMe Controllers 00:18:07.071 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:07.071 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:07.071 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:18:07.071 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:18:07.071 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:18:07.071 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:18:07.071 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:07.071 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:07.071 Initialization complete. Launching workers. 00:18:07.071 Starting thread on core 1 with urgent priority queue 00:18:07.071 Starting thread on core 2 with urgent priority queue 00:18:07.071 Starting thread on core 3 with urgent priority queue 00:18:07.071 Starting thread on core 0 with urgent priority queue 00:18:07.071 SPDK bdev Controller (SPDK2 ) core 0: 16850.33 IO/s 5.93 secs/100000 ios 00:18:07.071 SPDK bdev Controller (SPDK2 ) core 1: 9706.67 IO/s 10.30 secs/100000 ios 00:18:07.071 SPDK bdev Controller (SPDK2 ) core 2: 10282.33 IO/s 9.73 secs/100000 ios 00:18:07.071 SPDK bdev Controller (SPDK2 ) core 3: 11798.00 IO/s 8.48 secs/100000 ios 00:18:07.071 ======================================================== 00:18:07.071 00:18:07.072 10:34:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:07.072 EAL: No free 2048 kB hugepages reported on node 1 00:18:07.072 [2024-07-22 10:34:12.605830] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:07.072 Initializing NVMe Controllers 00:18:07.072 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:07.072 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:07.072 Namespace ID: 1 size: 0GB 00:18:07.072 Initialization complete. 00:18:07.072 INFO: using host memory buffer for IO 00:18:07.072 Hello world! 
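The perf, reconnect, arbitration and hello_world runs above all reach the emulated controller through the same vfio-user transport ID string rather than a PCIe address or an IP/port pair. A minimal sketch of that invocation pattern, reusing only flags and paths that appear in this run (the socket directory and subsystem NQN are specific to this job, and the binaries live under the workspace's spdk/build tree):

    # Sketch: the same transport ID string is passed to spdk_nvme_perf and the example apps.
    TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
    ./build/bin/spdk_nvme_perf -r "$TRID" -g -s 256 -q 128 -o 4096 -w read -t 5 -c 0x2
    ./build/examples/hello_world -d 256 -g -r "$TRID"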
00:18:07.072 [2024-07-22 10:34:12.617911] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:07.072 10:34:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:07.072 EAL: No free 2048 kB hugepages reported on node 1 00:18:07.331 [2024-07-22 10:34:12.893893] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:08.709 Initializing NVMe Controllers 00:18:08.709 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:08.709 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:08.709 Initialization complete. Launching workers. 00:18:08.709 submit (in ns) avg, min, max = 9439.5, 3922.5, 4001459.2 00:18:08.709 complete (in ns) avg, min, max = 17513.7, 2390.8, 3999859.2 00:18:08.709 00:18:08.709 Submit histogram 00:18:08.709 ================ 00:18:08.709 Range in us Cumulative Count 00:18:08.709 3.920 - 3.947: 1.2935% ( 251) 00:18:08.709 3.947 - 3.973: 6.5708% ( 1024) 00:18:08.709 3.973 - 4.000: 14.6825% ( 1574) 00:18:08.709 4.000 - 4.027: 27.6747% ( 2521) 00:18:08.709 4.027 - 4.053: 40.2597% ( 2442) 00:18:08.709 4.053 - 4.080: 53.5250% ( 2574) 00:18:08.709 4.080 - 4.107: 72.3098% ( 3645) 00:18:08.709 4.107 - 4.133: 85.3896% ( 2538) 00:18:08.709 4.133 - 4.160: 93.5065% ( 1575) 00:18:08.709 4.160 - 4.187: 97.6603% ( 806) 00:18:08.709 4.187 - 4.213: 98.9435% ( 249) 00:18:08.709 4.213 - 4.240: 99.2837% ( 66) 00:18:08.709 4.240 - 4.267: 99.3661% ( 16) 00:18:08.709 4.267 - 4.293: 99.3919% ( 5) 00:18:08.709 4.293 - 4.320: 99.3970% ( 1) 00:18:08.709 4.347 - 4.373: 99.4022% ( 1) 00:18:08.709 4.640 - 4.667: 99.4073% ( 1) 00:18:08.709 4.880 - 4.907: 99.4125% ( 1) 00:18:08.709 4.907 - 4.933: 99.4176% ( 1) 00:18:08.709 4.960 - 4.987: 99.4228% ( 1) 00:18:08.709 5.013 - 5.040: 99.4280% ( 1) 00:18:08.709 5.120 - 5.147: 99.4331% ( 1) 00:18:08.709 5.147 - 5.173: 99.4383% ( 1) 00:18:08.709 5.173 - 5.200: 99.4434% ( 1) 00:18:08.709 5.333 - 5.360: 99.4486% ( 1) 00:18:08.709 5.360 - 5.387: 99.4589% ( 2) 00:18:08.709 6.000 - 6.027: 99.4640% ( 1) 00:18:08.709 6.027 - 6.053: 99.4692% ( 1) 00:18:08.709 6.053 - 6.080: 99.4795% ( 2) 00:18:08.709 6.133 - 6.160: 99.4846% ( 1) 00:18:08.709 6.160 - 6.187: 99.4898% ( 1) 00:18:08.709 6.187 - 6.213: 99.4949% ( 1) 00:18:08.709 6.213 - 6.240: 99.5001% ( 1) 00:18:08.709 6.240 - 6.267: 99.5053% ( 1) 00:18:08.709 6.293 - 6.320: 99.5104% ( 1) 00:18:08.709 6.480 - 6.507: 99.5156% ( 1) 00:18:08.709 6.560 - 6.587: 99.5259% ( 2) 00:18:08.709 6.747 - 6.773: 99.5310% ( 1) 00:18:08.709 6.773 - 6.800: 99.5362% ( 1) 00:18:08.709 6.800 - 6.827: 99.5516% ( 3) 00:18:08.709 6.987 - 7.040: 99.5568% ( 1) 00:18:08.709 7.147 - 7.200: 99.5619% ( 1) 00:18:08.709 7.307 - 7.360: 99.5671% ( 1) 00:18:08.709 7.360 - 7.413: 99.5826% ( 3) 00:18:08.709 7.467 - 7.520: 99.5877% ( 1) 00:18:08.709 7.520 - 7.573: 99.5929% ( 1) 00:18:08.709 7.573 - 7.627: 99.5980% ( 1) 00:18:08.709 7.627 - 7.680: 99.6186% ( 4) 00:18:08.709 7.680 - 7.733: 99.6238% ( 1) 00:18:08.709 7.733 - 7.787: 99.6341% ( 2) 00:18:08.709 7.787 - 7.840: 99.6496% ( 3) 00:18:08.709 7.840 - 7.893: 99.6702% ( 4) 00:18:08.709 7.893 - 7.947: 99.6753% ( 1) 00:18:08.709 7.947 - 8.000: 99.6856% ( 2) 00:18:08.709 8.000 - 8.053: 99.6959% ( 2) 00:18:08.709 8.053 - 8.107: 99.7062% ( 2) 00:18:08.709 8.107 - 8.160: 99.7217% ( 3) 
00:18:08.709 8.160 - 8.213: 99.7269% ( 1) 00:18:08.709 8.267 - 8.320: 99.7320% ( 1) 00:18:08.709 8.320 - 8.373: 99.7372% ( 1) 00:18:08.709 8.373 - 8.427: 99.7475% ( 2) 00:18:08.709 8.427 - 8.480: 99.7629% ( 3) 00:18:08.709 8.480 - 8.533: 99.7681% ( 1) 00:18:08.709 8.533 - 8.587: 99.7784% ( 2) 00:18:08.709 8.640 - 8.693: 99.7835% ( 1) 00:18:08.709 8.693 - 8.747: 99.7887% ( 1) 00:18:08.709 8.747 - 8.800: 99.8042% ( 3) 00:18:08.709 8.853 - 8.907: 99.8093% ( 1) 00:18:08.709 8.907 - 8.960: 99.8145% ( 1) 00:18:08.709 8.960 - 9.013: 99.8196% ( 1) 00:18:08.709 9.013 - 9.067: 99.8248% ( 1) 00:18:08.709 9.067 - 9.120: 99.8299% ( 1) 00:18:08.709 9.333 - 9.387: 99.8351% ( 1) 00:18:08.709 12.853 - 12.907: 99.8402% ( 1) 00:18:08.709 13.440 - 13.493: 99.8454% ( 1) 00:18:08.709 14.933 - 15.040: 99.8505% ( 1) 00:18:08.709 16.533 - 16.640: 99.8557% ( 1) 00:18:08.710 20.053 - 20.160: 99.8609% ( 1) 00:18:08.710 20.693 - 20.800: 99.8660% ( 1) 00:18:08.710 [2024-07-22 10:34:13.990095] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:08.710 3986.773 - 4014.080: 100.0000% ( 26) 00:18:08.710 00:18:08.710 Complete histogram 00:18:08.710 ================== 00:18:08.710 Range in us Cumulative Count 00:18:08.710 2.387 - 2.400: 0.0258% ( 5) 00:18:08.710 2.400 - 2.413: 1.0771% ( 204) 00:18:08.710 2.413 - 2.427: 1.1699% ( 18) 00:18:08.710 2.427 - 2.440: 1.2523% ( 16) 00:18:08.710 2.440 - 2.453: 50.2165% ( 9501) 00:18:08.710 2.453 - 2.467: 69.2435% ( 3692) 00:18:08.710 2.467 - 2.480: 77.8190% ( 1664) 00:18:08.710 2.480 - 2.493: 82.6479% ( 937) 00:18:08.710 2.493 - 2.507: 84.1115% ( 284) 00:18:08.710 2.507 - 2.520: 86.8429% ( 530) 00:18:08.710 2.520 - 2.533: 92.3263% ( 1064) 00:18:08.710 2.533 - 2.547: 96.5368% ( 817) 00:18:08.710 2.547 - 2.560: 98.2066% ( 324) 00:18:08.710 2.560 - 2.573: 98.9847% ( 151) 00:18:08.710 2.573 - 2.587: 99.2682% ( 55) 00:18:08.710 2.587 - 2.600: 99.3249% ( 11) 00:18:08.710 2.600 - 2.613: 99.3403% ( 3) 00:18:08.710 2.627 - 2.640: 99.3455% ( 1) 00:18:08.710 2.667 - 2.680: 99.3506% ( 1) 00:18:08.710 2.973 - 2.987: 99.3558% ( 1) 00:18:08.710 3.413 - 3.440: 99.3610% ( 1) 00:18:08.710 4.560 - 4.587: 99.3661% ( 1) 00:18:08.710 4.587 - 4.613: 99.3764% ( 2) 00:18:08.710 4.613 - 4.640: 99.3867% ( 2) 00:18:08.710 4.667 - 4.693: 99.3919% ( 1) 00:18:08.710 4.747 - 4.773: 99.3970% ( 1) 00:18:08.710 4.773 - 4.800: 99.4073% ( 2) 00:18:08.710 4.827 - 4.853: 99.4125% ( 1) 00:18:08.710 5.413 - 5.440: 99.4176% ( 1) 00:18:08.710 5.813 - 5.840: 99.4228% ( 1) 00:18:08.710 5.867 - 5.893: 99.4383% ( 3) 00:18:08.710 5.893 - 5.920: 99.4434% ( 1) 00:18:08.710 5.947 - 5.973: 99.4486% ( 1) 00:18:08.710 6.000 - 6.027: 99.4537% ( 1) 00:18:08.710 6.027 - 6.053: 99.4589% ( 1) 00:18:08.710 6.080 - 6.107: 99.4743% ( 3) 00:18:08.710 6.133 - 6.160: 99.4795% ( 1) 00:18:08.710 6.213 - 6.240: 99.4846% ( 1) 00:18:08.710 6.267 - 6.293: 99.4898% ( 1) 00:18:08.710 6.320 - 6.347: 99.4949% ( 1) 00:18:08.710 6.347 - 6.373: 99.5001% ( 1) 00:18:08.710 6.480 - 6.507: 99.5053% ( 1) 00:18:08.710 6.507 - 6.533: 99.5104% ( 1) 00:18:08.710 6.533 - 6.560: 99.5156% ( 1) 00:18:08.710 6.560 - 6.587: 99.5207% ( 1) 00:18:08.710 6.587 - 6.613: 99.5259% ( 1) 00:18:08.710 6.667 - 6.693: 99.5310% ( 1) 00:18:08.710 6.693 - 6.720: 99.5413% ( 2) 00:18:08.710 6.800 - 6.827: 99.5465% ( 1) 00:18:08.710 6.827 - 6.880: 99.5516% ( 1) 00:18:08.710 6.880 - 6.933: 99.5568% ( 1) 00:18:08.710 7.040 - 7.093: 99.5619% ( 1) 00:18:08.710 7.200 - 7.253: 99.5723% ( 2) 00:18:08.710 7.360 - 7.413: 99.5774% ( 1) 
00:18:08.710 7.627 - 7.680: 99.5877% ( 2) 00:18:08.710 8.160 - 8.213: 99.5929% ( 1) 00:18:08.710 10.773 - 10.827: 99.5980% ( 1) 00:18:08.710 11.947 - 12.000: 99.6032% ( 1) 00:18:08.710 16.533 - 16.640: 99.6083% ( 1) 00:18:08.710 16.640 - 16.747: 99.6135% ( 1) 00:18:08.710 26.773 - 26.880: 99.6186% ( 1) 00:18:08.710 34.773 - 34.987: 99.6238% ( 1) 00:18:08.710 3986.773 - 4014.080: 100.0000% ( 73) 00:18:08.710 00:18:08.710 10:34:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:18:08.710 10:34:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:08.710 10:34:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:18:08.710 10:34:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:18:08.710 10:34:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:08.710 [ 00:18:08.710 { 00:18:08.710 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:08.710 "subtype": "Discovery", 00:18:08.710 "listen_addresses": [], 00:18:08.710 "allow_any_host": true, 00:18:08.710 "hosts": [] 00:18:08.710 }, 00:18:08.710 { 00:18:08.710 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:08.710 "subtype": "NVMe", 00:18:08.710 "listen_addresses": [ 00:18:08.710 { 00:18:08.710 "trtype": "VFIOUSER", 00:18:08.710 "adrfam": "IPv4", 00:18:08.710 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:08.710 "trsvcid": "0" 00:18:08.710 } 00:18:08.710 ], 00:18:08.710 "allow_any_host": true, 00:18:08.710 "hosts": [], 00:18:08.710 "serial_number": "SPDK1", 00:18:08.710 "model_number": "SPDK bdev Controller", 00:18:08.710 "max_namespaces": 32, 00:18:08.710 "min_cntlid": 1, 00:18:08.710 "max_cntlid": 65519, 00:18:08.710 "namespaces": [ 00:18:08.710 { 00:18:08.710 "nsid": 1, 00:18:08.710 "bdev_name": "Malloc1", 00:18:08.710 "name": "Malloc1", 00:18:08.710 "nguid": "CBC1A57DA90E4FB3A20D4BBE99AAAD0B", 00:18:08.710 "uuid": "cbc1a57d-a90e-4fb3-a20d-4bbe99aaad0b" 00:18:08.710 }, 00:18:08.710 { 00:18:08.710 "nsid": 2, 00:18:08.710 "bdev_name": "Malloc3", 00:18:08.710 "name": "Malloc3", 00:18:08.710 "nguid": "3684D206814A49D8B04088362A2C43AA", 00:18:08.710 "uuid": "3684d206-814a-49d8-b040-88362a2c43aa" 00:18:08.710 } 00:18:08.710 ] 00:18:08.710 }, 00:18:08.710 { 00:18:08.710 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:08.710 "subtype": "NVMe", 00:18:08.710 "listen_addresses": [ 00:18:08.710 { 00:18:08.710 "trtype": "VFIOUSER", 00:18:08.710 "adrfam": "IPv4", 00:18:08.710 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:08.710 "trsvcid": "0" 00:18:08.710 } 00:18:08.710 ], 00:18:08.710 "allow_any_host": true, 00:18:08.710 "hosts": [], 00:18:08.710 "serial_number": "SPDK2", 00:18:08.710 "model_number": "SPDK bdev Controller", 00:18:08.710 "max_namespaces": 32, 00:18:08.710 "min_cntlid": 1, 00:18:08.710 "max_cntlid": 65519, 00:18:08.710 "namespaces": [ 00:18:08.710 { 00:18:08.710 "nsid": 1, 00:18:08.710 "bdev_name": "Malloc2", 00:18:08.710 "name": "Malloc2", 00:18:08.710 "nguid": "AAEB6F5200FF4C3B8375B983DAB715AE", 00:18:08.710 "uuid": "aaeb6f52-00ff-4c3b-8375-b983dab715ae" 00:18:08.710 } 00:18:08.710 ] 00:18:08.710 } 00:18:08.710 ] 00:18:08.710 10:34:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:08.710 10:34:14 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@34 -- # aerpid=1913935 00:18:08.710 10:34:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:08.710 10:34:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:18:08.710 10:34:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:18:08.710 10:34:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:08.710 10:34:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:08.710 10:34:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:18:08.710 10:34:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:08.710 10:34:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:18:08.710 EAL: No free 2048 kB hugepages reported on node 1 00:18:08.710 Malloc4 00:18:08.710 [2024-07-22 10:34:14.391874] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:08.710 10:34:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:18:08.970 [2024-07-22 10:34:14.532779] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:08.970 10:34:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:08.970 Asynchronous Event Request test 00:18:08.970 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:08.970 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:08.970 Registering asynchronous event callbacks... 00:18:08.970 Starting namespace attribute notice tests for all controllers... 00:18:08.970 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:08.970 aer_cb - Changed Namespace 00:18:08.970 Cleaning up... 
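The "Changed Namespace" callback above is driven by a namespace hot-add on the live subsystem: a new malloc bdev is created and attached while the aer tool holds an outstanding Asynchronous Event Request, and the nvmf_get_subsystems listing that follows shows the result (Malloc4 attached to cnode2 as nsid 2). A sketch of the two RPCs that trigger it, with arguments taken from this run:

    # Hot-add a namespace; a connected host with a pending AER then sees a
    # Namespace Attribute Changed notice (log page 4), as logged above.
    ./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2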
00:18:09.230 [ 00:18:09.230 { 00:18:09.230 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:09.230 "subtype": "Discovery", 00:18:09.230 "listen_addresses": [], 00:18:09.230 "allow_any_host": true, 00:18:09.230 "hosts": [] 00:18:09.230 }, 00:18:09.230 { 00:18:09.230 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:09.230 "subtype": "NVMe", 00:18:09.230 "listen_addresses": [ 00:18:09.230 { 00:18:09.230 "trtype": "VFIOUSER", 00:18:09.230 "adrfam": "IPv4", 00:18:09.230 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:09.230 "trsvcid": "0" 00:18:09.230 } 00:18:09.230 ], 00:18:09.230 "allow_any_host": true, 00:18:09.230 "hosts": [], 00:18:09.230 "serial_number": "SPDK1", 00:18:09.230 "model_number": "SPDK bdev Controller", 00:18:09.230 "max_namespaces": 32, 00:18:09.230 "min_cntlid": 1, 00:18:09.230 "max_cntlid": 65519, 00:18:09.230 "namespaces": [ 00:18:09.230 { 00:18:09.230 "nsid": 1, 00:18:09.230 "bdev_name": "Malloc1", 00:18:09.230 "name": "Malloc1", 00:18:09.230 "nguid": "CBC1A57DA90E4FB3A20D4BBE99AAAD0B", 00:18:09.230 "uuid": "cbc1a57d-a90e-4fb3-a20d-4bbe99aaad0b" 00:18:09.230 }, 00:18:09.230 { 00:18:09.230 "nsid": 2, 00:18:09.230 "bdev_name": "Malloc3", 00:18:09.230 "name": "Malloc3", 00:18:09.230 "nguid": "3684D206814A49D8B04088362A2C43AA", 00:18:09.230 "uuid": "3684d206-814a-49d8-b040-88362a2c43aa" 00:18:09.230 } 00:18:09.230 ] 00:18:09.230 }, 00:18:09.230 { 00:18:09.230 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:09.230 "subtype": "NVMe", 00:18:09.230 "listen_addresses": [ 00:18:09.230 { 00:18:09.230 "trtype": "VFIOUSER", 00:18:09.230 "adrfam": "IPv4", 00:18:09.230 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:09.230 "trsvcid": "0" 00:18:09.230 } 00:18:09.230 ], 00:18:09.230 "allow_any_host": true, 00:18:09.230 "hosts": [], 00:18:09.230 "serial_number": "SPDK2", 00:18:09.230 "model_number": "SPDK bdev Controller", 00:18:09.230 "max_namespaces": 32, 00:18:09.230 "min_cntlid": 1, 00:18:09.230 "max_cntlid": 65519, 00:18:09.230 "namespaces": [ 00:18:09.230 { 00:18:09.230 "nsid": 1, 00:18:09.230 "bdev_name": "Malloc2", 00:18:09.230 "name": "Malloc2", 00:18:09.230 "nguid": "AAEB6F5200FF4C3B8375B983DAB715AE", 00:18:09.230 "uuid": "aaeb6f52-00ff-4c3b-8375-b983dab715ae" 00:18:09.230 }, 00:18:09.230 { 00:18:09.230 "nsid": 2, 00:18:09.230 "bdev_name": "Malloc4", 00:18:09.230 "name": "Malloc4", 00:18:09.230 "nguid": "5F177AD09A9144E3B1B80D44C666094A", 00:18:09.230 "uuid": "5f177ad0-9a91-44e3-b1b8-0d44c666094a" 00:18:09.230 } 00:18:09.230 ] 00:18:09.230 } 00:18:09.230 ] 00:18:09.230 10:34:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1913935 00:18:09.230 10:34:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:18:09.230 10:34:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1904865 00:18:09.230 10:34:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 1904865 ']' 00:18:09.230 10:34:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 1904865 00:18:09.230 10:34:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:18:09.230 10:34:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:09.230 10:34:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1904865 00:18:09.230 10:34:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:09.230 10:34:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo 
']' 00:18:09.230 10:34:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1904865' 00:18:09.230 killing process with pid 1904865 00:18:09.230 10:34:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 1904865 00:18:09.230 10:34:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 1904865 00:18:09.494 10:34:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:09.494 10:34:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:09.494 10:34:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:18:09.494 10:34:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:18:09.494 10:34:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:18:09.494 10:34:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1914022 00:18:09.494 10:34:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1914022' 00:18:09.494 Process pid: 1914022 00:18:09.494 10:34:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:09.494 10:34:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:18:09.494 10:34:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1914022 00:18:09.494 10:34:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 1914022 ']' 00:18:09.494 10:34:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:09.494 10:34:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:09.494 10:34:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:09.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:09.494 10:34:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:09.494 10:34:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:09.494 [2024-07-22 10:34:15.004992] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:18:09.494 [2024-07-22 10:34:15.005959] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:18:09.494 [2024-07-22 10:34:15.006006] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:09.494 EAL: No free 2048 kB hugepages reported on node 1 00:18:09.494 [2024-07-22 10:34:15.075309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:09.494 [2024-07-22 10:34:15.108024] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:09.494 [2024-07-22 10:34:15.108063] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:09.494 [2024-07-22 10:34:15.108071] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:09.494 [2024-07-22 10:34:15.108078] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:09.494 [2024-07-22 10:34:15.108083] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:09.494 [2024-07-22 10:34:15.108222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:09.494 [2024-07-22 10:34:15.108340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:09.494 [2024-07-22 10:34:15.108496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.494 [2024-07-22 10:34:15.108496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:09.494 [2024-07-22 10:34:15.173415] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:18:09.494 [2024-07-22 10:34:15.173434] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:18:09.494 [2024-07-22 10:34:15.174416] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:18:09.494 [2024-07-22 10:34:15.174787] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:18:09.494 [2024-07-22 10:34:15.174876] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:18:10.136 10:34:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:10.136 10:34:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:18:10.136 10:34:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:11.515 10:34:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:18:11.515 10:34:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:11.515 10:34:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:11.515 10:34:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:11.515 10:34:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:11.515 10:34:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:11.515 Malloc1 00:18:11.515 10:34:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:11.774 10:34:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:12.033 10:34:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:12.033 10:34:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
$NUM_DEVICES) 00:18:12.033 10:34:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:12.033 10:34:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:12.293 Malloc2 00:18:12.293 10:34:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:18:12.293 10:34:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:12.553 10:34:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:12.813 10:34:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:18:12.813 10:34:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1914022 00:18:12.813 10:34:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 1914022 ']' 00:18:12.813 10:34:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 1914022 00:18:12.813 10:34:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:18:12.813 10:34:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:12.813 10:34:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1914022 00:18:12.813 10:34:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:12.813 10:34:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:12.813 10:34:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1914022' 00:18:12.813 killing process with pid 1914022 00:18:12.813 10:34:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 1914022 00:18:12.813 10:34:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 1914022 00:18:13.074 10:34:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:13.075 00:18:13.075 real 0m50.537s 00:18:13.075 user 3m20.380s 00:18:13.075 sys 0m3.123s 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:13.075 ************************************ 00:18:13.075 END TEST nvmf_vfio_user 00:18:13.075 ************************************ 00:18:13.075 10:34:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:13.075 10:34:18 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:13.075 10:34:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:13.075 10:34:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:13.075 10:34:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:13.075 ************************************ 00:18:13.075 START 
TEST nvmf_vfio_user_nvme_compliance 00:18:13.075 ************************************ 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:13.075 * Looking for test storage... 00:18:13.075 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1914937 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1914937' 00:18:13.075 Process pid: 1914937 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1914937 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 1914937 ']' 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:13.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:13.075 10:34:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:18:13.335 [2024-07-22 10:34:18.773920] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:18:13.335 [2024-07-22 10:34:18.773987] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:13.335 EAL: No free 2048 kB hugepages reported on node 1 00:18:13.335 [2024-07-22 10:34:18.846673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:13.335 [2024-07-22 10:34:18.887611] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:13.336 [2024-07-22 10:34:18.887652] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:13.336 [2024-07-22 10:34:18.887660] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:13.336 [2024-07-22 10:34:18.887666] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:13.336 [2024-07-22 10:34:18.887673] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:13.336 [2024-07-22 10:34:18.887813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:13.336 [2024-07-22 10:34:18.887940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:13.336 [2024-07-22 10:34:18.887942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:13.909 10:34:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:13.909 10:34:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:18:13.909 10:34:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:18:14.897 10:34:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:14.897 10:34:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:18:14.897 10:34:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:14.897 10:34:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.897 10:34:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:14.897 10:34:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.897 10:34:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:18:14.897 10:34:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:14.897 10:34:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.897 10:34:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:15.156 malloc0 00:18:15.156 10:34:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.156 10:34:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:18:15.156 10:34:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.156 10:34:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:15.156 10:34:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.156 10:34:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:15.156 10:34:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.156 10:34:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:15.156 10:34:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.156 10:34:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:15.156 10:34:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.156 10:34:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:15.156 10:34:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.156 
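Before the compliance binary attaches, the traced rpc_cmd calls above stand up a vfio-user controller backed by a malloc bdev. rpc_cmd is the autotest helper; with the plain RPC client the same sequence looks roughly like this (NQN, serial number and socket directory as used in this run):

    # Create the VFIOUSER transport and expose malloc0 as a namespace of cnode0.
    ./scripts/rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    ./scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0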
10:34:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:18:15.156 EAL: No free 2048 kB hugepages reported on node 1 00:18:15.156 00:18:15.156 00:18:15.156 CUnit - A unit testing framework for C - Version 2.1-3 00:18:15.156 http://cunit.sourceforge.net/ 00:18:15.156 00:18:15.156 00:18:15.156 Suite: nvme_compliance 00:18:15.156 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-22 10:34:20.823825] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:15.156 [2024-07-22 10:34:20.825170] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:18:15.156 [2024-07-22 10:34:20.825180] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:18:15.156 [2024-07-22 10:34:20.825185] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:18:15.156 [2024-07-22 10:34:20.826841] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:15.415 passed 00:18:15.415 Test: admin_identify_ctrlr_verify_fused ...[2024-07-22 10:34:20.923425] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:15.415 [2024-07-22 10:34:20.926439] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:15.415 passed 00:18:15.415 Test: admin_identify_ns ...[2024-07-22 10:34:21.025975] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:15.415 [2024-07-22 10:34:21.085408] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:18:15.415 [2024-07-22 10:34:21.093406] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:18:15.674 [2024-07-22 10:34:21.114518] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:15.674 passed 00:18:15.674 Test: admin_get_features_mandatory_features ...[2024-07-22 10:34:21.208496] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:15.674 [2024-07-22 10:34:21.211518] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:15.674 passed 00:18:15.674 Test: admin_get_features_optional_features ...[2024-07-22 10:34:21.306031] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:15.674 [2024-07-22 10:34:21.309044] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:15.674 passed 00:18:15.933 Test: admin_set_features_number_of_queues ...[2024-07-22 10:34:21.402949] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:15.933 [2024-07-22 10:34:21.507502] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:15.933 passed 00:18:15.933 Test: admin_get_log_page_mandatory_logs ...[2024-07-22 10:34:21.601471] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:15.933 [2024-07-22 10:34:21.604488] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:16.193 passed 00:18:16.193 Test: admin_get_log_page_with_lpo ...[2024-07-22 10:34:21.698032] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:16.193 [2024-07-22 10:34:21.769406] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:18:16.193 [2024-07-22 10:34:21.782455] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:16.193 passed 00:18:16.193 Test: fabric_property_get ...[2024-07-22 10:34:21.876068] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:16.193 [2024-07-22 10:34:21.877313] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:18:16.193 [2024-07-22 10:34:21.879085] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:16.453 passed 00:18:16.453 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-22 10:34:21.973696] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:16.453 [2024-07-22 10:34:21.974948] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:18:16.453 [2024-07-22 10:34:21.976714] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:16.453 passed 00:18:16.453 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-22 10:34:22.070875] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:16.712 [2024-07-22 10:34:22.154401] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:16.712 [2024-07-22 10:34:22.170403] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:16.712 [2024-07-22 10:34:22.175483] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:16.712 passed 00:18:16.712 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-22 10:34:22.268078] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:16.712 [2024-07-22 10:34:22.269316] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:18:16.712 [2024-07-22 10:34:22.271093] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:16.712 passed 00:18:16.712 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-22 10:34:22.365251] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:16.972 [2024-07-22 10:34:22.440408] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:16.972 [2024-07-22 10:34:22.464404] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:16.972 [2024-07-22 10:34:22.469489] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:16.972 passed 00:18:16.972 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-22 10:34:22.564093] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:16.972 [2024-07-22 10:34:22.565329] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:18:16.972 [2024-07-22 10:34:22.565348] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:18:16.972 [2024-07-22 10:34:22.567108] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:16.972 passed 00:18:16.972 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-22 10:34:22.657177] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:17.232 [2024-07-22 10:34:22.752403] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:18:17.232 [2024-07-22 10:34:22.760403] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:18:17.232 [2024-07-22 10:34:22.768399] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:18:17.232 [2024-07-22 10:34:22.776402] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:18:17.232 [2024-07-22 10:34:22.805481] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:17.232 passed 00:18:17.232 Test: admin_create_io_sq_verify_pc ...[2024-07-22 10:34:22.895087] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:17.232 [2024-07-22 10:34:22.910410] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:18:17.232 [2024-07-22 10:34:22.928241] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:17.490 passed 00:18:17.490 Test: admin_create_io_qp_max_qps ...[2024-07-22 10:34:23.021768] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:18.428 [2024-07-22 10:34:24.111406] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:18:18.997 [2024-07-22 10:34:24.507901] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:18.997 passed 00:18:18.997 Test: admin_create_io_sq_shared_cq ...[2024-07-22 10:34:24.603064] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:19.257 [2024-07-22 10:34:24.734408] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:19.257 [2024-07-22 10:34:24.766456] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:19.257 passed 00:18:19.257 00:18:19.257 Run Summary: Type Total Ran Passed Failed Inactive 00:18:19.257 suites 1 1 n/a 0 0 00:18:19.257 tests 18 18 18 0 0 00:18:19.257 asserts 360 360 360 0 n/a 00:18:19.257 00:18:19.257 Elapsed time = 1.656 seconds 00:18:19.257 10:34:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1914937 00:18:19.257 10:34:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 1914937 ']' 00:18:19.257 10:34:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 1914937 00:18:19.257 10:34:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:18:19.257 10:34:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:19.257 10:34:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1914937 00:18:19.257 10:34:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:19.257 10:34:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:19.257 10:34:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1914937' 00:18:19.257 killing process with pid 1914937 00:18:19.257 10:34:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 1914937 00:18:19.257 10:34:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 1914937 00:18:19.518 10:34:24 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:18:19.518 10:34:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:18:19.518 00:18:19.518 real 0m6.403s 00:18:19.518 user 0m18.396s 00:18:19.518 sys 0m0.485s 00:18:19.518 10:34:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:19.518 10:34:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:19.518 ************************************ 00:18:19.518 END TEST nvmf_vfio_user_nvme_compliance 00:18:19.518 ************************************ 00:18:19.518 10:34:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:19.518 10:34:25 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:19.518 10:34:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:19.518 10:34:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:19.518 10:34:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:19.518 ************************************ 00:18:19.518 START TEST nvmf_vfio_user_fuzz 00:18:19.518 ************************************ 00:18:19.518 10:34:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:19.518 * Looking for test storage... 00:18:19.518 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:19.518 10:34:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:19.518 10:34:25 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:18:19.518 10:34:25 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:19.518 10:34:25 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:19.518 10:34:25 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:19.518 10:34:25 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:19.518 10:34:25 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:19.518 10:34:25 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:19.518 10:34:25 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:19.518 10:34:25 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:19.518 10:34:25 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:19.518 10:34:25 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:19.518 10:34:25 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:19.518 10:34:25 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:19.518 10:34:25 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:19.518 10:34:25 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:19.518 10:34:25 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:18:19.518 10:34:25 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:19.518 10:34:25 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:19.518 10:34:25 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:19.518 10:34:25 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:19.518 10:34:25 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:19.519 10:34:25 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.519 10:34:25 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.519 10:34:25 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.519 10:34:25 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:18:19.519 10:34:25 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.519 10:34:25 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:18:19.519 10:34:25 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:19.519 10:34:25 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:19.519 10:34:25 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:19.519 10:34:25 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:19.519 10:34:25 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:19.519 10:34:25 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:19.519 10:34:25 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:19.519 10:34:25 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:19.519 10:34:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:19.519 10:34:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:19.519 10:34:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:19.519 10:34:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:18:19.519 10:34:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:19.519 10:34:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:19.519 10:34:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:18:19.519 10:34:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1916140 00:18:19.519 10:34:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1916140' 00:18:19.519 Process pid: 1916140 00:18:19.519 10:34:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:19.519 10:34:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:19.519 10:34:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1916140 00:18:19.519 10:34:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 1916140 ']' 00:18:19.519 10:34:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.519 10:34:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:19.519 10:34:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
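[Editor's note] The fuzz target is now being brought up: the trace launches nvmf_tgt with a fixed shm id, the full tracepoint mask and a single core, records its pid, installs a kill trap, and waits on the RPC socket via the waitforlisten helper. A stand-alone sketch of that step under stated assumptions; the polling loop is a stand-in for waitforlisten, not its actual code, and $spdk is just shorthand for the workspace path in the log.

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &      # shm id 0, all tracepoints, core mask 0x1
nvmfpid=$!
trap 'kill -9 "$nvmfpid"; exit 1' SIGINT SIGTERM EXIT   # rough equivalent of the killprocess trap above
# Wait until the target answers on its default RPC socket (stand-in for waitforlisten).
until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
done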
00:18:19.519 10:34:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:19.519 10:34:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:20.458 10:34:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:20.458 10:34:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:18:20.458 10:34:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:18:21.396 10:34:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:21.396 10:34:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.396 10:34:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:21.396 10:34:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.396 10:34:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:18:21.396 10:34:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:21.396 10:34:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.396 10:34:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:21.396 malloc0 00:18:21.396 10:34:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.396 10:34:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:18:21.396 10:34:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.396 10:34:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:21.396 10:34:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.396 10:34:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:21.396 10:34:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.396 10:34:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:21.656 10:34:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.656 10:34:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:21.656 10:34:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.656 10:34:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:21.656 10:34:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.656 10:34:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:18:21.656 10:34:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:18:53.753 Fuzzing completed. 
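[Editor's note] The RPC sequence traced above is the entire vfio-user fuzz setup: a VFIOUSER transport, a 64 MB malloc bdev with 512 B blocks, a subsystem that allows any host, the bdev as its namespace, and a listener on the /var/run/vfio-user socket directory, after which nvme_fuzz hammers the controller for roughly 30 seconds with a fixed seed. A condensed sketch of the same sequence using scripts/rpc.py directly; the test goes through its rpc_cmd wrapper, so this is an equivalent, not the script's literal text.

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="$spdk/scripts/rpc.py"
mkdir -p /var/run/vfio-user
$rpc nvmf_create_transport -t VFIOUSER                              # vfio-user transport in the target
$rpc bdev_malloc_create 64 512 -b malloc0                           # 64 MB bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk    # -a: allow any host, serial "spdk"
$rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
$rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
# Fuzz the controller over the vfio-user transport; flags are copied from the traced invocation
# (-m 0x2 core mask, -t 30 run time, -S 123456 seed; cf. the random_seed values reported below).
"$spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -t 30 -S 123456 \
    -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a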
Shutting down the fuzz application 00:18:53.753 00:18:53.753 Dumping successful admin opcodes: 00:18:53.753 8, 9, 10, 24, 00:18:53.753 Dumping successful io opcodes: 00:18:53.754 0, 00:18:53.754 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1119170, total successful commands: 4404, random_seed: 2685542080 00:18:53.754 NS: 0x200003a1ef00 admin qp, Total commands completed: 140796, total successful commands: 1143, random_seed: 434795840 00:18:53.754 10:34:58 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:18:53.754 10:34:58 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.754 10:34:58 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:53.754 10:34:58 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.754 10:34:58 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1916140 00:18:53.754 10:34:58 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 1916140 ']' 00:18:53.754 10:34:58 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 1916140 00:18:53.754 10:34:58 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:18:53.754 10:34:58 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:53.754 10:34:58 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1916140 00:18:53.754 10:34:58 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:53.754 10:34:58 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:53.754 10:34:58 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1916140' 00:18:53.754 killing process with pid 1916140 00:18:53.754 10:34:58 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 1916140 00:18:53.754 10:34:58 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 1916140 00:18:53.754 10:34:58 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:18:53.754 10:34:58 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:18:53.754 00:18:53.754 real 0m33.647s 00:18:53.754 user 0m37.658s 00:18:53.754 sys 0m25.713s 00:18:53.754 10:34:58 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:53.754 10:34:58 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:53.754 ************************************ 00:18:53.754 END TEST nvmf_vfio_user_fuzz 00:18:53.754 ************************************ 00:18:53.754 10:34:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:53.754 10:34:58 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:18:53.754 10:34:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:53.754 10:34:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:53.754 10:34:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:53.754 ************************************ 
00:18:53.754 START TEST nvmf_host_management 00:18:53.754 ************************************ 00:18:53.754 10:34:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:18:53.754 * Looking for test storage... 00:18:53.754 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:53.754 10:34:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:53.754 10:34:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:18:53.754 10:34:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:53.754 10:34:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:53.754 10:34:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:53.754 10:34:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:53.754 10:34:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:53.754 10:34:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:53.754 10:34:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:53.754 10:34:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:53.754 10:34:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:53.754 10:34:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:53.754 10:34:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:53.754 10:34:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:53.754 10:34:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:53.754 10:34:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:53.754 10:34:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:53.754 10:34:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:53.754 10:34:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:53.754 10:34:58 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:53.754 10:34:58 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:53.754 10:34:58 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:53.754 10:34:58 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.754 
10:34:58 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.754 10:34:58 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.754 10:34:58 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:18:53.754 10:34:58 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.754 10:34:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:18:53.754 10:34:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:53.754 10:34:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:53.754 10:34:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:53.754 10:34:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:53.754 10:34:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:53.754 10:34:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:53.754 10:34:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:53.754 10:34:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:53.754 10:34:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:53.754 10:34:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:53.754 10:34:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:18:53.754 10:34:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:53.754 10:34:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:53.754 10:34:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:53.754 10:34:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:53.754 10:34:58 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:53.754 10:34:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:53.754 10:34:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:53.754 10:34:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:53.754 10:34:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:53.754 10:34:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:53.754 10:34:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:18:53.754 10:34:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:01.889 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:01.889 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:01.889 Found net devices under 0000:31:00.0: cvl_0_0 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:01.889 Found net devices under 0000:31:00.1: cvl_0_1 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:01.889 10:35:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:01.889 10:35:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:01.889 10:35:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:01.889 10:35:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:01.889 10:35:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:01.889 10:35:07 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:01.889 10:35:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:01.889 10:35:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:01.889 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:01.889 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.583 ms 00:19:01.889 00:19:01.889 --- 10.0.0.2 ping statistics --- 00:19:01.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:01.889 rtt min/avg/max/mdev = 0.583/0.583/0.583/0.000 ms 00:19:01.889 10:35:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:01.889 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:01.889 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:19:01.889 00:19:01.889 --- 10.0.0.1 ping statistics --- 00:19:01.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:01.889 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:19:01.889 10:35:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:01.889 10:35:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:19:01.889 10:35:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:01.889 10:35:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:01.889 10:35:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:01.890 10:35:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:01.890 10:35:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:01.890 10:35:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:01.890 10:35:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:01.890 10:35:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:19:01.890 10:35:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:19:01.890 10:35:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:19:01.890 10:35:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:01.890 10:35:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:01.890 10:35:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:01.890 10:35:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1927088 00:19:01.890 10:35:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1927088 00:19:01.890 10:35:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:01.890 10:35:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1927088 ']' 00:19:01.890 10:35:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:01.890 10:35:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:01.890 10:35:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:19:01.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:01.890 10:35:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:01.890 10:35:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:01.890 [2024-07-22 10:35:07.276649] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:19:01.890 [2024-07-22 10:35:07.276703] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:01.890 EAL: No free 2048 kB hugepages reported on node 1 00:19:01.890 [2024-07-22 10:35:07.367762] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:01.890 [2024-07-22 10:35:07.404578] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:01.890 [2024-07-22 10:35:07.404624] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:01.890 [2024-07-22 10:35:07.404632] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:01.890 [2024-07-22 10:35:07.404639] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:01.890 [2024-07-22 10:35:07.404645] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:01.890 [2024-07-22 10:35:07.404759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:01.890 [2024-07-22 10:35:07.404916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:01.890 [2024-07-22 10:35:07.405072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:01.890 [2024-07-22 10:35:07.405073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:02.458 10:35:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:02.458 10:35:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:19:02.458 10:35:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:02.458 10:35:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:02.458 10:35:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:02.458 10:35:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:02.458 10:35:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:02.458 10:35:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.458 10:35:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:02.458 [2024-07-22 10:35:08.091088] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:02.458 10:35:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.458 10:35:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:19:02.458 10:35:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:02.458 10:35:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:02.458 10:35:08 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:02.458 10:35:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:19:02.458 10:35:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:19:02.458 10:35:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.458 10:35:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:02.458 Malloc0 00:19:02.458 [2024-07-22 10:35:08.154448] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:02.718 10:35:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.718 10:35:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:19:02.718 10:35:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:02.718 10:35:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:02.718 10:35:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1927159 00:19:02.718 10:35:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1927159 /var/tmp/bdevperf.sock 00:19:02.718 10:35:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1927159 ']' 00:19:02.718 10:35:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:02.718 10:35:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:02.718 10:35:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:02.718 10:35:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:02.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
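[Editor's note] At this point the host-management topology is in place: the physical port cvl_0_0 has been moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2, cvl_0_1 stays in the root namespace with 10.0.0.1, nvmf_tgt runs inside the namespace, and bdevperf will connect from outside. The per-test RPCs are batched through rpcs.txt and not echoed individually, so the block below is only a plausible reconstruction consistent with what the trace does show (the same transport flags, a Malloc0 bdev sized by MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE, and a listener on 10.0.0.2:4420 for nqn.2016-06.io.spdk:cnode0); the serial number and the explicit add_host step are assumptions.

# Network plumbing already done by nvmf_tcp_init above, shown here only for context:
#   ip netns add cvl_0_0_ns_spdk
#   ip link set cvl_0_0 netns cvl_0_0_ns_spdk
#   ip addr add 10.0.0.1/24 dev cvl_0_1
#   ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
#   iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="$spdk/scripts/rpc.py"
$rpc nvmf_create_transport -t tcp -o -u 8192                      # same transport flags as the trace
$rpc bdev_malloc_create 64 512 -b Malloc0                         # matches MALLOC_BDEV_SIZE/BLOCK_SIZE above
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0    # serial "SPDK0" is an assumption
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # assumed; revoked later
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420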
00:19:02.718 10:35:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:19:02.718 10:35:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:02.718 10:35:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:02.718 10:35:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:19:02.718 10:35:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:19:02.718 10:35:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:02.718 10:35:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:02.718 { 00:19:02.718 "params": { 00:19:02.718 "name": "Nvme$subsystem", 00:19:02.718 "trtype": "$TEST_TRANSPORT", 00:19:02.718 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:02.718 "adrfam": "ipv4", 00:19:02.718 "trsvcid": "$NVMF_PORT", 00:19:02.718 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:02.718 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:02.718 "hdgst": ${hdgst:-false}, 00:19:02.718 "ddgst": ${ddgst:-false} 00:19:02.718 }, 00:19:02.718 "method": "bdev_nvme_attach_controller" 00:19:02.718 } 00:19:02.718 EOF 00:19:02.718 )") 00:19:02.718 10:35:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:19:02.718 10:35:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:19:02.718 10:35:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:19:02.718 10:35:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:02.718 "params": { 00:19:02.718 "name": "Nvme0", 00:19:02.718 "trtype": "tcp", 00:19:02.718 "traddr": "10.0.0.2", 00:19:02.718 "adrfam": "ipv4", 00:19:02.718 "trsvcid": "4420", 00:19:02.718 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:02.718 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:02.718 "hdgst": false, 00:19:02.718 "ddgst": false 00:19:02.718 }, 00:19:02.718 "method": "bdev_nvme_attach_controller" 00:19:02.718 }' 00:19:02.718 [2024-07-22 10:35:08.254279] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:19:02.718 [2024-07-22 10:35:08.254329] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1927159 ] 00:19:02.718 EAL: No free 2048 kB hugepages reported on node 1 00:19:02.718 [2024-07-22 10:35:08.322927] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.718 [2024-07-22 10:35:08.354550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.978 Running I/O for 10 seconds... 
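[Editor's note] The JSON printed above is the bdev_nvme_attach_controller config that gen_nvmf_target_json (from the test/nvmf/common.sh sourced earlier) resolves from its template: bdevperf gets an Nvme0 controller pointing at 10.0.0.2:4420 / nqn.2016-06.io.spdk:cnode0 with digests disabled, then drives a verify workload against it. A hedged sketch of that invocation; the flag readings are taken from the command line traced above and $spdk is shorthand for the workspace path.

# bdevperf run as traced above: queue depth 64, 65536-byte I/Os, "verify" workload, 10 s run,
# on its own RPC socket so it does not collide with nvmf_tgt's /var/tmp/spdk.sock.
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$spdk/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 0) \
    -q 64 -o 65536 -w verify -t 10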
00:19:03.551 10:35:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:03.551 10:35:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:19:03.551 10:35:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:03.551 10:35:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.552 10:35:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:03.552 10:35:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.552 10:35:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:03.552 10:35:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:19:03.552 10:35:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:03.552 10:35:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:19:03.552 10:35:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:19:03.552 10:35:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:19:03.552 10:35:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:19:03.552 10:35:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:19:03.552 10:35:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:19:03.552 10:35:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:19:03.552 10:35:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.552 10:35:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:03.552 10:35:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.552 10:35:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=584 00:19:03.552 10:35:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 584 -ge 100 ']' 00:19:03.552 10:35:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:19:03.552 10:35:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:19:03.552 10:35:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:19:03.552 10:35:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:19:03.552 10:35:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.552 10:35:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:03.552 [2024-07-22 10:35:09.101445] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1cf40 is same with the state(5) to be set 00:19:03.552 [2024-07-22 10:35:09.101517] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1cf40 is same with the state(5) to be set 00:19:03.552 [2024-07-22 10:35:09.101525] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1cf40 is same with the state(5) to be 
set 00:19:03.552 [2024-07-22 10:35:09.101532] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1cf40 is same with the state(5) to be set 00:19:03.552 [2024-07-22 10:35:09.101538] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1cf40 is same with the state(5) to be set 00:19:03.552 [2024-07-22 10:35:09.101545] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1cf40 is same with the state(5) to be set 00:19:03.552 [2024-07-22 10:35:09.101551] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1cf40 is same with the state(5) to be set 00:19:03.552 [2024-07-22 10:35:09.101558] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1cf40 is same with the state(5) to be set 00:19:03.552 [2024-07-22 10:35:09.101564] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1cf40 is same with the state(5) to be set 00:19:03.552 [2024-07-22 10:35:09.101571] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1cf40 is same with the state(5) to be set 00:19:03.552 [2024-07-22 10:35:09.101577] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1cf40 is same with the state(5) to be set 00:19:03.552 [2024-07-22 10:35:09.101583] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1cf40 is same with the state(5) to be set 00:19:03.552 [2024-07-22 10:35:09.101589] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1cf40 is same with the state(5) to be set 00:19:03.552 [2024-07-22 10:35:09.101596] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1cf40 is same with the state(5) to be set 00:19:03.552 [2024-07-22 10:35:09.101602] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1cf40 is same with the state(5) to be set 00:19:03.552 [2024-07-22 10:35:09.101608] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1cf40 is same with the state(5) to be set 00:19:03.552 [2024-07-22 10:35:09.101614] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1cf40 is same with the state(5) to be set 00:19:03.552 [2024-07-22 10:35:09.101625] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1cf40 is same with the state(5) to be set 00:19:03.552 [2024-07-22 10:35:09.101632] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1cf40 is same with the state(5) to be set 00:19:03.552 [2024-07-22 10:35:09.101638] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1cf40 is same with the state(5) to be set 00:19:03.552 [2024-07-22 10:35:09.101644] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1cf40 is same with the state(5) to be set 00:19:03.552 [2024-07-22 10:35:09.101650] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1cf40 is same with the state(5) to be set 00:19:03.552 [2024-07-22 10:35:09.101656] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1cf40 is same with the state(5) to be set 00:19:03.552 [2024-07-22 10:35:09.101663] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1cf40 is same with the state(5) to be set 00:19:03.552 [2024-07-22 10:35:09.101669] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1f1cf40 is same with the state(5) to be set 00:19:03.552 [2024-07-22 10:35:09.101675] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1cf40 is same with the state(5) to be set 00:19:03.552 [2024-07-22 10:35:09.101681] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1cf40 is same with the state(5) to be set 00:19:03.552 [2024-07-22 10:35:09.101688] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1cf40 is same with the state(5) to be set 00:19:03.552 [2024-07-22 10:35:09.101694] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1cf40 is same with the state(5) to be set 00:19:03.552 [2024-07-22 10:35:09.101701] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1cf40 is same with the state(5) to be set 00:19:03.552 [2024-07-22 10:35:09.101708] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1cf40 is same with the state(5) to be set 00:19:03.552 [2024-07-22 10:35:09.101714] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1cf40 is same with the state(5) to be set 00:19:03.552 [2024-07-22 10:35:09.101721] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1cf40 is same with the state(5) to be set 00:19:03.552 [2024-07-22 10:35:09.101727] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1cf40 is same with the state(5) to be set 00:19:03.552 [2024-07-22 10:35:09.101733] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1cf40 is same with the state(5) to be set 00:19:03.552 [2024-07-22 10:35:09.101741] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1cf40 is same with the state(5) to be set 00:19:03.552 [2024-07-22 10:35:09.101747] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1cf40 is same with the state(5) to be set 00:19:03.552 [2024-07-22 10:35:09.101753] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1cf40 is same with the state(5) to be set 00:19:03.552 [2024-07-22 10:35:09.101759] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1cf40 is same with the state(5) to be set 00:19:03.552 [2024-07-22 10:35:09.101766] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1cf40 is same with the state(5) to be set 00:19:03.552 [2024-07-22 10:35:09.101772] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1cf40 is same with the state(5) to be set 00:19:03.552 [2024-07-22 10:35:09.101778] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1cf40 is same with the state(5) to be set 00:19:03.552 [2024-07-22 10:35:09.101784] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1cf40 is same with the state(5) to be set 00:19:03.552 [2024-07-22 10:35:09.101792] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1cf40 is same with the state(5) to be set 00:19:03.552 [2024-07-22 10:35:09.101800] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1cf40 is same with the state(5) to be set 00:19:03.552 [2024-07-22 10:35:09.101806] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1cf40 is same with the state(5) to be set 00:19:03.552 [2024-07-22 10:35:09.102404] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:03.552 [2024-07-22 10:35:09.102442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.552 [2024-07-22 10:35:09.102453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:03.552 [2024-07-22 10:35:09.102461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.552 [2024-07-22 10:35:09.102470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:03.552 [2024-07-22 10:35:09.102477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.552 [2024-07-22 10:35:09.102485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:03.552 [2024-07-22 10:35:09.102493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.552 [2024-07-22 10:35:09.102501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235e290 is same with the state(5) to be set 00:19:03.552 [2024-07-22 10:35:09.103545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.552 [2024-07-22 10:35:09.103564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.552 [2024-07-22 10:35:09.103579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.552 [2024-07-22 10:35:09.103587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.552 [2024-07-22 10:35:09.103597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.552 [2024-07-22 10:35:09.103604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.552 [2024-07-22 10:35:09.103614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:90496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.552 [2024-07-22 10:35:09.103621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.552 [2024-07-22 10:35:09.103630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:90624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.552 [2024-07-22 10:35:09.103638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.552 [2024-07-22 10:35:09.103647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.552 [2024-07-22 10:35:09.103655] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.552 [2024-07-22 10:35:09.103664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.552 [2024-07-22 10:35:09.103671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.552 [2024-07-22 10:35:09.103685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.553 [2024-07-22 10:35:09.103692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.553 [2024-07-22 10:35:09.103702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.553 [2024-07-22 10:35:09.103710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.553 [2024-07-22 10:35:09.103719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.553 [2024-07-22 10:35:09.103726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.553 [2024-07-22 10:35:09.103735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.553 [2024-07-22 10:35:09.103743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.553 [2024-07-22 10:35:09.103753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.553 [2024-07-22 10:35:09.103760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.553 [2024-07-22 10:35:09.103769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.553 [2024-07-22 10:35:09.103776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.553 [2024-07-22 10:35:09.103785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.553 [2024-07-22 10:35:09.103793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.553 [2024-07-22 10:35:09.103803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.553 [2024-07-22 10:35:09.103810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.553 [2024-07-22 10:35:09.103819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.553 [2024-07-22 10:35:09.103827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.553 [2024-07-22 10:35:09.103837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.553 [2024-07-22 10:35:09.103845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.553 [2024-07-22 10:35:09.103854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.553 [2024-07-22 10:35:09.103861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.553 [2024-07-22 10:35:09.103870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.553 [2024-07-22 10:35:09.103877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.553 [2024-07-22 10:35:09.103887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.553 [2024-07-22 10:35:09.103896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.553 [2024-07-22 10:35:09.103906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.553 [2024-07-22 10:35:09.103912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.553 [2024-07-22 10:35:09.103922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.553 [2024-07-22 10:35:09.103928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.553 [2024-07-22 10:35:09.103938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.553 [2024-07-22 10:35:09.103946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.553 [2024-07-22 10:35:09.103955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.553 [2024-07-22 10:35:09.103962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.553 [2024-07-22 10:35:09.103971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.553 [2024-07-22 10:35:09.103977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.553 [2024-07-22 10:35:09.103988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.553 [2024-07-22 10:35:09.103995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.553 [2024-07-22 10:35:09.104005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.553 [2024-07-22 10:35:09.104012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.553 [2024-07-22 10:35:09.104021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.553 [2024-07-22 10:35:09.104028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.553 [2024-07-22 10:35:09.104038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.553 [2024-07-22 10:35:09.104045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.553 [2024-07-22 10:35:09.104055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.553 [2024-07-22 10:35:09.104062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.553 [2024-07-22 10:35:09.104070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.553 [2024-07-22 10:35:09.104078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.553 [2024-07-22 10:35:09.104087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.553 [2024-07-22 10:35:09.104094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.553 [2024-07-22 10:35:09.104105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.553 [2024-07-22 10:35:09.104112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.553 [2024-07-22 10:35:09.104121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.553 [2024-07-22 10:35:09.104129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.553 [2024-07-22 10:35:09.104138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.553 [2024-07-22 10:35:09.104146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.553 [2024-07-22 10:35:09.104155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.553 [2024-07-22 10:35:09.104162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:19:03.553 [2024-07-22 10:35:09.104171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.553 [2024-07-22 10:35:09.104179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.553 [2024-07-22 10:35:09.104188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.553 [2024-07-22 10:35:09.104195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.553 [2024-07-22 10:35:09.104204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.553 [2024-07-22 10:35:09.104211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.553 [2024-07-22 10:35:09.104221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.553 [2024-07-22 10:35:09.104229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.553 [2024-07-22 10:35:09.104239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.553 [2024-07-22 10:35:09.104246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.553 [2024-07-22 10:35:09.104255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.553 [2024-07-22 10:35:09.104262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.553 [2024-07-22 10:35:09.104272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.553 [2024-07-22 10:35:09.104279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.553 [2024-07-22 10:35:09.104289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.553 [2024-07-22 10:35:09.104296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.553 [2024-07-22 10:35:09.104305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.553 [2024-07-22 10:35:09.104313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.553 [2024-07-22 10:35:09.104324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.553 [2024-07-22 10:35:09.104331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:03.553 [2024-07-22 10:35:09.104340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.553 [2024-07-22 10:35:09.104347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.553 [2024-07-22 10:35:09.104356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.553 [2024-07-22 10:35:09.104364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.553 [2024-07-22 10:35:09.104373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:90880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.553 [2024-07-22 10:35:09.104381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.553 [2024-07-22 10:35:09.104390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:91008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.553 [2024-07-22 10:35:09.104401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.553 [2024-07-22 10:35:09.104410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.553 [2024-07-22 10:35:09.104417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.553 [2024-07-22 10:35:09.104427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:91264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.553 [2024-07-22 10:35:09.104435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.553 [2024-07-22 10:35:09.104444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:91392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.553 [2024-07-22 10:35:09.104451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.553 [2024-07-22 10:35:09.104460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:91520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.554 [2024-07-22 10:35:09.104467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.554 [2024-07-22 10:35:09.104477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:91648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.554 [2024-07-22 10:35:09.104484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.554 [2024-07-22 10:35:09.104493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:91776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.554 [2024-07-22 10:35:09.104500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.554 
[2024-07-22 10:35:09.104509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:91904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.554 [2024-07-22 10:35:09.104518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.554 [2024-07-22 10:35:09.104529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.554 [2024-07-22 10:35:09.104537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.554 [2024-07-22 10:35:09.104546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:92160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.554 [2024-07-22 10:35:09.104553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.554 [2024-07-22 10:35:09.104562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:92288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.554 [2024-07-22 10:35:09.104570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.554 [2024-07-22 10:35:09.104579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:92416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.554 [2024-07-22 10:35:09.104586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.554 [2024-07-22 10:35:09.104595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:92544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.554 [2024-07-22 10:35:09.104602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.554 [2024-07-22 10:35:09.104611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:92672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.554 [2024-07-22 10:35:09.104618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.554 [2024-07-22 10:35:09.104627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.554 [2024-07-22 10:35:09.104634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.554 [2024-07-22 10:35:09.104691] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23585b0 was disconnected and freed. reset controller. 
00:19:03.554 [2024-07-22 10:35:09.105898] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:03.554 10:35:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.554 10:35:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:19:03.554 10:35:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.554 10:35:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:03.554 task offset: 90112 on job bdev=Nvme0n1 fails 00:19:03.554 00:19:03.554 Latency(us) 00:19:03.554 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:03.554 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:03.554 Job: Nvme0n1 ended in about 0.45 seconds with error 00:19:03.554 Verification LBA range: start 0x0 length 0x400 00:19:03.554 Nvme0n1 : 0.45 1456.99 91.06 140.86 0.00 38911.14 1495.04 37573.97 00:19:03.554 =================================================================================================================== 00:19:03.554 Total : 1456.99 91.06 140.86 0.00 38911.14 1495.04 37573.97 00:19:03.554 [2024-07-22 10:35:09.107911] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:03.554 [2024-07-22 10:35:09.107934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235e290 (9): Bad file descriptor 00:19:03.554 10:35:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.554 10:35:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:19:03.554 [2024-07-22 10:35:09.170634] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
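The rpc_cmd call traced above re-authorizes the initiator host on the subsystem while the controller reset is in flight. Outside the test harness the same step maps onto a plain rpc.py invocation; a minimal sketch, with the NQNs and script path taken from this run and the nvmf_get_subsystems call added here only as an assumed verification step, not something the test itself does:

# allow host0 on cnode0 (mirrors the rpc_cmd nvmf_subsystem_add_host call in the log)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# dump the subsystems and their allowed hosts to confirm the change (assumed check)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems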
00:19:04.492 10:35:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1927159 00:19:04.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1927159) - No such process 00:19:04.492 10:35:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:19:04.492 10:35:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:19:04.492 10:35:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:19:04.492 10:35:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:19:04.492 10:35:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:19:04.492 10:35:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:19:04.493 10:35:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:04.493 10:35:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:04.493 { 00:19:04.493 "params": { 00:19:04.493 "name": "Nvme$subsystem", 00:19:04.493 "trtype": "$TEST_TRANSPORT", 00:19:04.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:04.493 "adrfam": "ipv4", 00:19:04.493 "trsvcid": "$NVMF_PORT", 00:19:04.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:04.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:04.493 "hdgst": ${hdgst:-false}, 00:19:04.493 "ddgst": ${ddgst:-false} 00:19:04.493 }, 00:19:04.493 "method": "bdev_nvme_attach_controller" 00:19:04.493 } 00:19:04.493 EOF 00:19:04.493 )") 00:19:04.493 10:35:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:19:04.493 10:35:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:19:04.493 10:35:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:19:04.493 10:35:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:04.493 "params": { 00:19:04.493 "name": "Nvme0", 00:19:04.493 "trtype": "tcp", 00:19:04.493 "traddr": "10.0.0.2", 00:19:04.493 "adrfam": "ipv4", 00:19:04.493 "trsvcid": "4420", 00:19:04.493 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:04.493 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:04.493 "hdgst": false, 00:19:04.493 "ddgst": false 00:19:04.493 }, 00:19:04.493 "method": "bdev_nvme_attach_controller" 00:19:04.493 }' 00:19:04.493 [2024-07-22 10:35:10.176022] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:19:04.493 [2024-07-22 10:35:10.176082] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1927600 ] 00:19:04.753 EAL: No free 2048 kB hugepages reported on node 1 00:19:04.753 [2024-07-22 10:35:10.240391] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.753 [2024-07-22 10:35:10.271204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:05.014 Running I/O for 1 seconds... 
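The --json /dev/fd/62 argument feeds bdevperf the controller definition printed just above through a process substitution instead of a file on disk. A minimal standalone sketch of the same configuration written to a temporary file follows; the /tmp path is hypothetical, the subsystems/config wrapper is assumed from SPDK's usual JSON config layout, and only the bdev_nvme_attach_controller parameters and workload flags are taken verbatim from this run:

cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# same workload as the run above: queue depth 64, 64 KiB I/O, verify workload, 1 second
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 1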
00:19:05.957 00:19:05.957 Latency(us) 00:19:05.957 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:05.957 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:05.957 Verification LBA range: start 0x0 length 0x400 00:19:05.957 Nvme0n1 : 1.02 1498.59 93.66 0.00 0.00 42003.22 9666.56 33641.81 00:19:05.957 =================================================================================================================== 00:19:05.957 Total : 1498.59 93.66 0.00 0.00 42003.22 9666.56 33641.81 00:19:05.957 10:35:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:19:05.957 10:35:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:19:05.957 10:35:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:05.957 10:35:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:05.957 10:35:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:19:05.957 10:35:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:05.957 10:35:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:19:05.957 10:35:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:05.957 10:35:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:19:05.957 10:35:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:05.957 10:35:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:05.957 rmmod nvme_tcp 00:19:05.957 rmmod nvme_fabrics 00:19:06.218 rmmod nvme_keyring 00:19:06.218 10:35:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:06.218 10:35:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:19:06.218 10:35:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:19:06.218 10:35:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1927088 ']' 00:19:06.218 10:35:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1927088 00:19:06.218 10:35:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 1927088 ']' 00:19:06.218 10:35:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 1927088 00:19:06.218 10:35:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:19:06.218 10:35:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:06.218 10:35:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1927088 00:19:06.218 10:35:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:06.218 10:35:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:06.218 10:35:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1927088' 00:19:06.218 killing process with pid 1927088 00:19:06.218 10:35:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 1927088 00:19:06.218 10:35:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 1927088 00:19:06.218 [2024-07-22 10:35:11.832019] app.c: 
711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:19:06.218 10:35:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:06.218 10:35:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:06.218 10:35:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:06.218 10:35:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:06.218 10:35:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:06.218 10:35:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:06.218 10:35:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:06.218 10:35:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:08.761 10:35:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:08.761 10:35:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:19:08.761 00:19:08.761 real 0m15.133s 00:19:08.761 user 0m22.544s 00:19:08.761 sys 0m7.110s 00:19:08.761 10:35:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:08.761 10:35:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:08.761 ************************************ 00:19:08.761 END TEST nvmf_host_management 00:19:08.761 ************************************ 00:19:08.761 10:35:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:08.761 10:35:13 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:19:08.761 10:35:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:08.761 10:35:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:08.761 10:35:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:08.761 ************************************ 00:19:08.761 START TEST nvmf_lvol 00:19:08.761 ************************************ 00:19:08.761 10:35:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:19:08.761 * Looking for test storage... 
00:19:08.761 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:08.761 10:35:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:08.761 10:35:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:19:08.761 10:35:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:08.761 10:35:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:08.761 10:35:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:08.761 10:35:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:08.761 10:35:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:08.761 10:35:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:08.761 10:35:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:08.761 10:35:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:08.761 10:35:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:08.761 10:35:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:08.761 10:35:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:08.761 10:35:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:08.761 10:35:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:08.761 10:35:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:08.761 10:35:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:08.761 10:35:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:08.762 10:35:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:08.762 10:35:14 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:08.762 10:35:14 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:08.762 10:35:14 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:08.762 10:35:14 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.762 10:35:14 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.762 10:35:14 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.762 10:35:14 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:19:08.762 10:35:14 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.762 10:35:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:19:08.762 10:35:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:08.762 10:35:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:08.762 10:35:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:08.762 10:35:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:08.762 10:35:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:08.762 10:35:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:08.762 10:35:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:08.762 10:35:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:08.762 10:35:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:08.762 10:35:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:08.762 10:35:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:19:08.762 10:35:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:19:08.762 10:35:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:08.762 10:35:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:19:08.762 10:35:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:08.762 10:35:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:08.762 10:35:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:08.762 10:35:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:08.762 10:35:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:08.762 10:35:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:08.762 10:35:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:08.762 10:35:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:08.762 10:35:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:08.762 10:35:14 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:08.762 10:35:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:19:08.762 10:35:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:16.898 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:16.898 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:16.898 Found net devices under 0000:31:00.0: cvl_0_0 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:16.898 Found net devices under 0000:31:00.1: cvl_0_1 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:16.898 
10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:16.898 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:16.898 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.559 ms 00:19:16.898 00:19:16.898 --- 10.0.0.2 ping statistics --- 00:19:16.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.898 rtt min/avg/max/mdev = 0.559/0.559/0.559/0.000 ms 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:16.898 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:16.898 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:19:16.898 00:19:16.898 --- 10.0.0.1 ping statistics --- 00:19:16.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.898 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1932505 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1932505 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 1932505 ']' 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:16.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:19:16.898 10:35:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:19:16.898 [2024-07-22 10:35:21.775977] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:19:16.898 [2024-07-22 10:35:21.776023] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:16.898 EAL: No free 2048 kB hugepages reported on node 1 00:19:16.898 [2024-07-22 10:35:21.852421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:16.899 [2024-07-22 10:35:21.883793] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:16.899 [2024-07-22 10:35:21.883828] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:16.899 [2024-07-22 10:35:21.883836] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:16.899 [2024-07-22 10:35:21.883842] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:16.899 [2024-07-22 10:35:21.883847] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:16.899 [2024-07-22 10:35:21.883980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:16.899 [2024-07-22 10:35:21.884093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:16.899 [2024-07-22 10:35:21.884095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:16.899 10:35:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:16.899 10:35:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:19:16.899 10:35:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:16.899 10:35:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:16.899 10:35:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:19:16.899 10:35:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:16.899 10:35:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:17.159 [2024-07-22 10:35:22.725639] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:17.159 10:35:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:17.418 10:35:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:19:17.418 10:35:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:17.677 10:35:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:19:17.677 10:35:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:19:17.677 10:35:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:19:17.985 10:35:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=82fb9b51-4b8e-41f9-8d5d-74d1796a4ddc 00:19:17.985 10:35:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 82fb9b51-4b8e-41f9-8d5d-74d1796a4ddc lvol 20 00:19:17.985 10:35:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=a3896c37-e8f7-4fb5-8ed0-4a05c4813d70 00:19:17.985 10:35:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:19:18.303 10:35:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a3896c37-e8f7-4fb5-8ed0-4a05c4813d70 00:19:18.303 10:35:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
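Condensed, the lvol target bring-up traced in this test reduces to the rpc.py sequence below. This is a sketch for readability only: the $rpc shorthand and the shell variables are additions, while the transport options, bdev sizes, raid layout, NQN, and listener address are taken verbatim from this run.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# TCP transport with 8192-byte in-capsule data, as in the log
$rpc nvmf_create_transport -t tcp -o -u 8192
# two 64 MB malloc bdevs with 512-byte blocks, combined into a raid0 used as the lvstore base
$rpc bdev_malloc_create 64 512
$rpc bdev_malloc_create 64 512
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
# capture the generated UUIDs, as host_management-style scripts do with lvs=/lvol=
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)   # size 20 comes from LVOL_BDEV_INIT_SIZE (MiB assumed)
# expose the lvol over NVMe/TCP on the listener used in this run
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420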
00:19:18.562 [2024-07-22 10:35:24.111234] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:18.562 10:35:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:18.821 10:35:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1933061 00:19:18.821 10:35:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:19:18.821 10:35:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:19:18.821 EAL: No free 2048 kB hugepages reported on node 1 00:19:19.758 10:35:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot a3896c37-e8f7-4fb5-8ed0-4a05c4813d70 MY_SNAPSHOT 00:19:20.017 10:35:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=2eb09b45-71d5-4808-8c03-3bf220c62b34 00:19:20.017 10:35:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize a3896c37-e8f7-4fb5-8ed0-4a05c4813d70 30 00:19:20.277 10:35:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 2eb09b45-71d5-4808-8c03-3bf220c62b34 MY_CLONE 00:19:20.277 10:35:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=8d0af17b-4f56-4db5-8eeb-ef7781f5f696 00:19:20.277 10:35:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 8d0af17b-4f56-4db5-8eeb-ef7781f5f696 00:19:20.847 10:35:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1933061 00:19:28.991 Initializing NVMe Controllers 00:19:28.991 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:19:28.991 Controller IO queue size 128, less than required. 00:19:28.991 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:28.991 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:19:28.991 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:19:28.991 Initialization complete. Launching workers. 
00:19:28.991 ======================================================== 00:19:28.991 Latency(us) 00:19:28.991 Device Information : IOPS MiB/s Average min max 00:19:28.991 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12666.00 49.48 10111.12 1528.83 59306.08 00:19:28.991 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17817.00 69.60 7184.78 526.36 37787.40 00:19:28.991 ======================================================== 00:19:28.991 Total : 30483.00 119.07 8400.70 526.36 59306.08 00:19:28.991 00:19:28.991 10:35:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:29.250 10:35:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a3896c37-e8f7-4fb5-8ed0-4a05c4813d70 00:19:29.510 10:35:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 82fb9b51-4b8e-41f9-8d5d-74d1796a4ddc 00:19:29.510 10:35:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:19:29.510 10:35:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:19:29.510 10:35:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:19:29.510 10:35:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:29.510 10:35:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:19:29.510 10:35:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:29.510 10:35:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:19:29.510 10:35:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:29.510 10:35:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:29.510 rmmod nvme_tcp 00:19:29.510 rmmod nvme_fabrics 00:19:29.510 rmmod nvme_keyring 00:19:29.510 10:35:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:29.510 10:35:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:19:29.510 10:35:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:19:29.510 10:35:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1932505 ']' 00:19:29.510 10:35:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1932505 00:19:29.510 10:35:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 1932505 ']' 00:19:29.510 10:35:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 1932505 00:19:29.510 10:35:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:19:29.510 10:35:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:29.769 10:35:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1932505 00:19:29.769 10:35:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:29.769 10:35:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:29.769 10:35:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1932505' 00:19:29.769 killing process with pid 1932505 00:19:29.769 10:35:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 1932505 00:19:29.769 10:35:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 1932505 00:19:29.769 10:35:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:29.769 
10:35:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:29.769 10:35:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:29.769 10:35:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:29.769 10:35:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:29.769 10:35:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:29.769 10:35:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:29.769 10:35:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:32.310 10:35:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:32.310 00:19:32.310 real 0m23.471s 00:19:32.310 user 1m3.004s 00:19:32.310 sys 0m8.376s 00:19:32.310 10:35:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:32.310 10:35:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:19:32.310 ************************************ 00:19:32.310 END TEST nvmf_lvol 00:19:32.310 ************************************ 00:19:32.310 10:35:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:32.310 10:35:37 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:19:32.310 10:35:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:32.310 10:35:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:32.310 10:35:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:32.310 ************************************ 00:19:32.310 START TEST nvmf_lvs_grow 00:19:32.310 ************************************ 00:19:32.310 10:35:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:19:32.310 * Looking for test storage... 
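For reference, the nvmf_lvol teardown traced just above reduces to deleting the subsystem before the lvol and its lvstore, after which nvmftestfini unloads the initiator-side kernel modules and stops the target. A sketch of the order, not the full helper:

    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0   # detach from initiators first
    $rpc bdev_lvol_delete "$lvol"
    $rpc bdev_lvol_delete_lvstore -u "$lvs"
    # nvmftestfini then cleans up the host side and the target process
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                                         # pid 1932505 in this run
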
00:19:32.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:32.310 10:35:37 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:32.310 10:35:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:19:32.310 10:35:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:32.310 10:35:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:32.310 10:35:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:32.311 10:35:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:32.311 10:35:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:32.311 10:35:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:32.311 10:35:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:32.311 10:35:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:32.311 10:35:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:32.311 10:35:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:32.311 10:35:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:32.311 10:35:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:32.311 10:35:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:32.311 10:35:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:32.311 10:35:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:32.311 10:35:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:32.311 10:35:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:32.311 10:35:37 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:32.311 10:35:37 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:32.311 10:35:37 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:32.311 10:35:37 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.311 10:35:37 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.311 10:35:37 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.311 10:35:37 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:19:32.311 10:35:37 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.311 10:35:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:19:32.311 10:35:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:32.311 10:35:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:32.311 10:35:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:32.311 10:35:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:32.311 10:35:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:32.311 10:35:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:32.311 10:35:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:32.311 10:35:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:32.311 10:35:37 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:32.311 10:35:37 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:32.311 10:35:37 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:19:32.311 10:35:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:32.311 10:35:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:32.311 10:35:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:32.311 10:35:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:32.311 10:35:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:32.311 10:35:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:19:32.311 10:35:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:32.311 10:35:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:32.311 10:35:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:32.311 10:35:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:32.311 10:35:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:19:32.311 10:35:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:40.442 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:40.442 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:40.442 Found net devices under 0000:31:00.0: cvl_0_0 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:40.442 Found net devices under 0000:31:00.1: cvl_0_1 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:40.442 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:40.442 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.737 ms 00:19:40.442 00:19:40.442 --- 10.0.0.2 ping statistics --- 00:19:40.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.442 rtt min/avg/max/mdev = 0.737/0.737/0.737/0.000 ms 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:40.442 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:40.442 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.331 ms 00:19:40.442 00:19:40.442 --- 10.0.0.1 ping statistics --- 00:19:40.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.442 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:19:40.442 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:40.443 10:35:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:40.443 10:35:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:19:40.443 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1939893 00:19:40.443 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1939893 00:19:40.443 10:35:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:40.443 10:35:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 1939893 ']' 00:19:40.443 10:35:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.443 10:35:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:40.443 10:35:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:40.443 10:35:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:40.443 10:35:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:19:40.443 [2024-07-22 10:35:45.730611] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:19:40.443 [2024-07-22 10:35:45.730662] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:40.443 EAL: No free 2048 kB hugepages reported on node 1 00:19:40.443 [2024-07-22 10:35:45.802610] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.443 [2024-07-22 10:35:45.833942] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:40.443 [2024-07-22 10:35:45.833977] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
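The phy-mode setup above gives target and initiator separate network stacks on one host: the first E810 port (cvl_0_0) is moved into a namespace and addressed 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as 10.0.0.1, and the pings only succeed because the two ports are presumably looped back to each other. Stripped of the xtrace noise, the wiring is roughly ($SPDK is shorthand for the workspace spdk checkout):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP through
    ping -c 1 10.0.0.2                                                   # sanity-check both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # the target then runs inside the namespace:
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1
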
00:19:40.443 [2024-07-22 10:35:45.833985] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:40.443 [2024-07-22 10:35:45.833991] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:40.443 [2024-07-22 10:35:45.833997] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:40.443 [2024-07-22 10:35:45.834015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.013 10:35:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:41.013 10:35:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:19:41.013 10:35:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:41.013 10:35:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:41.013 10:35:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:19:41.013 10:35:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:41.013 10:35:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:41.013 [2024-07-22 10:35:46.646573] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:41.013 10:35:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:19:41.013 10:35:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:41.013 10:35:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:41.013 10:35:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:19:41.013 ************************************ 00:19:41.013 START TEST lvs_grow_clean 00:19:41.013 ************************************ 00:19:41.013 10:35:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:19:41.013 10:35:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:19:41.013 10:35:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:19:41.013 10:35:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:19:41.013 10:35:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:19:41.013 10:35:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:19:41.013 10:35:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:19:41.013 10:35:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:19:41.013 10:35:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:19:41.013 10:35:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:19:41.273 10:35:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:19:41.273 10:35:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:19:41.532 10:35:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=1e71162a-d357-4fa6-b2ac-761cda7bdbf6 00:19:41.532 10:35:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e71162a-d357-4fa6-b2ac-761cda7bdbf6 00:19:41.532 10:35:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:19:41.532 10:35:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:19:41.532 10:35:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:19:41.532 10:35:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1e71162a-d357-4fa6-b2ac-761cda7bdbf6 lvol 150 00:19:41.792 10:35:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=67c9b70c-df70-4cce-8713-6b6da2dfd83c 00:19:41.792 10:35:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:19:41.792 10:35:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:19:42.053 [2024-07-22 10:35:47.512898] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:19:42.053 [2024-07-22 10:35:47.512950] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:19:42.053 true 00:19:42.053 10:35:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e71162a-d357-4fa6-b2ac-761cda7bdbf6 00:19:42.053 10:35:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:19:42.053 10:35:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:19:42.053 10:35:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:19:42.314 10:35:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 67c9b70c-df70-4cce-8713-6b6da2dfd83c 00:19:42.574 10:35:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:42.574 [2024-07-22 10:35:48.154964] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:42.574 10:35:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:42.834 10:35:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1940389 00:19:42.834 10:35:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:42.834 10:35:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:19:42.834 10:35:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1940389 /var/tmp/bdevperf.sock 00:19:42.834 10:35:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 1940389 ']' 00:19:42.834 10:35:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:42.834 10:35:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:42.834 10:35:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:42.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:42.834 10:35:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:42.834 10:35:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:19:42.834 [2024-07-22 10:35:48.374066] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
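In the lvs_grow tests the initiator is not the kernel but SPDK's own bdevperf, started with -z (idle until told to run) on a private RPC socket so the script can attach the remote namespace and kick off I/O on demand. The driving sequence that follows in the trace is, in outline (paths and the Nvme0 bdev name as used by the script):

    bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    sock=/var/tmp/bdevperf.sock

    $bdevperf -r $sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    # attach the exported lvol as bdev "Nvme0n1" over NVMe/TCP
    $rpc -s $sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0
    # start the pre-configured 10-second randwrite workload
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests
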
00:19:42.834 [2024-07-22 10:35:48.374117] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1940389 ] 00:19:42.834 EAL: No free 2048 kB hugepages reported on node 1 00:19:42.834 [2024-07-22 10:35:48.455541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.834 [2024-07-22 10:35:48.486639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:43.774 10:35:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:43.774 10:35:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:19:43.774 10:35:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:19:43.774 Nvme0n1 00:19:44.033 10:35:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:19:44.033 [ 00:19:44.033 { 00:19:44.033 "name": "Nvme0n1", 00:19:44.033 "aliases": [ 00:19:44.033 "67c9b70c-df70-4cce-8713-6b6da2dfd83c" 00:19:44.033 ], 00:19:44.033 "product_name": "NVMe disk", 00:19:44.033 "block_size": 4096, 00:19:44.033 "num_blocks": 38912, 00:19:44.033 "uuid": "67c9b70c-df70-4cce-8713-6b6da2dfd83c", 00:19:44.033 "assigned_rate_limits": { 00:19:44.033 "rw_ios_per_sec": 0, 00:19:44.033 "rw_mbytes_per_sec": 0, 00:19:44.033 "r_mbytes_per_sec": 0, 00:19:44.033 "w_mbytes_per_sec": 0 00:19:44.033 }, 00:19:44.033 "claimed": false, 00:19:44.033 "zoned": false, 00:19:44.033 "supported_io_types": { 00:19:44.033 "read": true, 00:19:44.033 "write": true, 00:19:44.033 "unmap": true, 00:19:44.033 "flush": true, 00:19:44.033 "reset": true, 00:19:44.033 "nvme_admin": true, 00:19:44.033 "nvme_io": true, 00:19:44.033 "nvme_io_md": false, 00:19:44.033 "write_zeroes": true, 00:19:44.033 "zcopy": false, 00:19:44.033 "get_zone_info": false, 00:19:44.033 "zone_management": false, 00:19:44.033 "zone_append": false, 00:19:44.033 "compare": true, 00:19:44.033 "compare_and_write": true, 00:19:44.033 "abort": true, 00:19:44.033 "seek_hole": false, 00:19:44.033 "seek_data": false, 00:19:44.033 "copy": true, 00:19:44.033 "nvme_iov_md": false 00:19:44.033 }, 00:19:44.033 "memory_domains": [ 00:19:44.033 { 00:19:44.033 "dma_device_id": "system", 00:19:44.033 "dma_device_type": 1 00:19:44.033 } 00:19:44.033 ], 00:19:44.033 "driver_specific": { 00:19:44.033 "nvme": [ 00:19:44.033 { 00:19:44.033 "trid": { 00:19:44.033 "trtype": "TCP", 00:19:44.033 "adrfam": "IPv4", 00:19:44.033 "traddr": "10.0.0.2", 00:19:44.033 "trsvcid": "4420", 00:19:44.033 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:19:44.033 }, 00:19:44.033 "ctrlr_data": { 00:19:44.033 "cntlid": 1, 00:19:44.033 "vendor_id": "0x8086", 00:19:44.033 "model_number": "SPDK bdev Controller", 00:19:44.033 "serial_number": "SPDK0", 00:19:44.033 "firmware_revision": "24.09", 00:19:44.033 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:44.033 "oacs": { 00:19:44.033 "security": 0, 00:19:44.033 "format": 0, 00:19:44.033 "firmware": 0, 00:19:44.033 "ns_manage": 0 00:19:44.033 }, 00:19:44.033 "multi_ctrlr": true, 00:19:44.033 "ana_reporting": false 00:19:44.033 }, 
00:19:44.033 "vs": { 00:19:44.033 "nvme_version": "1.3" 00:19:44.033 }, 00:19:44.033 "ns_data": { 00:19:44.033 "id": 1, 00:19:44.033 "can_share": true 00:19:44.033 } 00:19:44.033 } 00:19:44.033 ], 00:19:44.033 "mp_policy": "active_passive" 00:19:44.033 } 00:19:44.033 } 00:19:44.033 ] 00:19:44.033 10:35:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1940620 00:19:44.033 10:35:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:19:44.033 10:35:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:44.033 Running I/O for 10 seconds... 00:19:45.414 Latency(us) 00:19:45.414 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:45.414 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:45.414 Nvme0n1 : 1.00 18061.00 70.55 0.00 0.00 0.00 0.00 0.00 00:19:45.415 =================================================================================================================== 00:19:45.415 Total : 18061.00 70.55 0.00 0.00 0.00 0.00 0.00 00:19:45.415 00:19:45.986 10:35:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1e71162a-d357-4fa6-b2ac-761cda7bdbf6 00:19:46.246 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:46.246 Nvme0n1 : 2.00 18212.00 71.14 0.00 0.00 0.00 0.00 0.00 00:19:46.246 =================================================================================================================== 00:19:46.246 Total : 18212.00 71.14 0.00 0.00 0.00 0.00 0.00 00:19:46.246 00:19:46.246 true 00:19:46.246 10:35:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e71162a-d357-4fa6-b2ac-761cda7bdbf6 00:19:46.246 10:35:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:19:46.505 10:35:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:19:46.505 10:35:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:19:46.505 10:35:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1940620 00:19:47.074 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:47.074 Nvme0n1 : 3.00 18256.00 71.31 0.00 0.00 0.00 0.00 0.00 00:19:47.074 =================================================================================================================== 00:19:47.074 Total : 18256.00 71.31 0.00 0.00 0.00 0.00 0.00 00:19:47.074 00:19:48.453 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:48.453 Nvme0n1 : 4.00 18279.75 71.41 0.00 0.00 0.00 0.00 0.00 00:19:48.453 =================================================================================================================== 00:19:48.453 Total : 18279.75 71.41 0.00 0.00 0.00 0.00 0.00 00:19:48.454 00:19:49.390 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:49.390 Nvme0n1 : 5.00 18304.80 71.50 0.00 0.00 0.00 0.00 0.00 00:19:49.390 =================================================================================================================== 00:19:49.390 
Total : 18304.80 71.50 0.00 0.00 0.00 0.00 0.00 00:19:49.390 00:19:50.327 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:50.327 Nvme0n1 : 6.00 18314.17 71.54 0.00 0.00 0.00 0.00 0.00 00:19:50.327 =================================================================================================================== 00:19:50.327 Total : 18314.17 71.54 0.00 0.00 0.00 0.00 0.00 00:19:50.327 00:19:51.263 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:51.263 Nvme0n1 : 7.00 18336.57 71.63 0.00 0.00 0.00 0.00 0.00 00:19:51.263 =================================================================================================================== 00:19:51.263 Total : 18336.57 71.63 0.00 0.00 0.00 0.00 0.00 00:19:51.263 00:19:52.201 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:52.201 Nvme0n1 : 8.00 18354.88 71.70 0.00 0.00 0.00 0.00 0.00 00:19:52.201 =================================================================================================================== 00:19:52.201 Total : 18354.88 71.70 0.00 0.00 0.00 0.00 0.00 00:19:52.201 00:19:53.137 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:53.137 Nvme0n1 : 9.00 18360.78 71.72 0.00 0.00 0.00 0.00 0.00 00:19:53.137 =================================================================================================================== 00:19:53.137 Total : 18360.78 71.72 0.00 0.00 0.00 0.00 0.00 00:19:53.137 00:19:54.075 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:54.075 Nvme0n1 : 10.00 18372.70 71.77 0.00 0.00 0.00 0.00 0.00 00:19:54.075 =================================================================================================================== 00:19:54.075 Total : 18372.70 71.77 0.00 0.00 0.00 0.00 0.00 00:19:54.075 00:19:54.075 00:19:54.075 Latency(us) 00:19:54.075 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:54.075 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:54.075 Nvme0n1 : 10.01 18373.47 71.77 0.00 0.00 6962.92 4205.23 12342.61 00:19:54.075 =================================================================================================================== 00:19:54.075 Total : 18373.47 71.77 0.00 0.00 6962.92 4205.23 12342.61 00:19:54.075 0 00:19:54.075 10:35:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1940389 00:19:54.075 10:35:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 1940389 ']' 00:19:54.075 10:35:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 1940389 00:19:54.075 10:35:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:19:54.075 10:35:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:54.075 10:35:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1940389 00:19:54.335 10:35:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:54.335 10:35:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:54.335 10:35:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1940389' 00:19:54.335 killing process with pid 1940389 00:19:54.335 10:35:59 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 1940389 00:19:54.335 Received shutdown signal, test time was about 10.000000 seconds 00:19:54.335 00:19:54.335 Latency(us) 00:19:54.335 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:54.335 =================================================================================================================== 00:19:54.335 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:54.335 10:35:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 1940389 00:19:54.335 10:35:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:54.595 10:36:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:54.854 10:36:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e71162a-d357-4fa6-b2ac-761cda7bdbf6 00:19:54.854 10:36:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:19:54.854 10:36:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:19:54.854 10:36:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:19:54.854 10:36:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:19:55.114 [2024-07-22 10:36:00.624584] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:19:55.114 10:36:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e71162a-d357-4fa6-b2ac-761cda7bdbf6 00:19:55.114 10:36:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:19:55.114 10:36:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e71162a-d357-4fa6-b2ac-761cda7bdbf6 00:19:55.114 10:36:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:55.114 10:36:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:55.114 10:36:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:55.115 10:36:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:55.115 10:36:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:55.115 10:36:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:55.115 10:36:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:55.115 10:36:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:19:55.115 10:36:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e71162a-d357-4fa6-b2ac-761cda7bdbf6 00:19:55.375 request: 00:19:55.375 { 00:19:55.375 "uuid": "1e71162a-d357-4fa6-b2ac-761cda7bdbf6", 00:19:55.375 "method": "bdev_lvol_get_lvstores", 00:19:55.375 "req_id": 1 00:19:55.375 } 00:19:55.375 Got JSON-RPC error response 00:19:55.375 response: 00:19:55.375 { 00:19:55.375 "code": -19, 00:19:55.375 "message": "No such device" 00:19:55.375 } 00:19:55.375 10:36:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:19:55.375 10:36:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:55.375 10:36:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:55.375 10:36:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:55.375 10:36:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:19:55.375 aio_bdev 00:19:55.375 10:36:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 67c9b70c-df70-4cce-8713-6b6da2dfd83c 00:19:55.375 10:36:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=67c9b70c-df70-4cce-8713-6b6da2dfd83c 00:19:55.375 10:36:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:55.375 10:36:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:19:55.375 10:36:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:55.375 10:36:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:55.375 10:36:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:55.635 10:36:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 67c9b70c-df70-4cce-8713-6b6da2dfd83c -t 2000 00:19:55.635 [ 00:19:55.635 { 00:19:55.635 "name": "67c9b70c-df70-4cce-8713-6b6da2dfd83c", 00:19:55.635 "aliases": [ 00:19:55.635 "lvs/lvol" 00:19:55.635 ], 00:19:55.635 "product_name": "Logical Volume", 00:19:55.635 "block_size": 4096, 00:19:55.635 "num_blocks": 38912, 00:19:55.635 "uuid": "67c9b70c-df70-4cce-8713-6b6da2dfd83c", 00:19:55.635 "assigned_rate_limits": { 00:19:55.635 "rw_ios_per_sec": 0, 00:19:55.635 "rw_mbytes_per_sec": 0, 00:19:55.635 "r_mbytes_per_sec": 0, 00:19:55.635 "w_mbytes_per_sec": 0 00:19:55.635 }, 00:19:55.635 "claimed": false, 00:19:55.635 "zoned": false, 00:19:55.635 "supported_io_types": { 00:19:55.635 "read": true, 00:19:55.635 "write": true, 00:19:55.635 "unmap": true, 00:19:55.635 "flush": false, 00:19:55.635 "reset": true, 00:19:55.635 "nvme_admin": false, 00:19:55.635 "nvme_io": false, 00:19:55.635 
"nvme_io_md": false, 00:19:55.635 "write_zeroes": true, 00:19:55.635 "zcopy": false, 00:19:55.635 "get_zone_info": false, 00:19:55.635 "zone_management": false, 00:19:55.635 "zone_append": false, 00:19:55.635 "compare": false, 00:19:55.635 "compare_and_write": false, 00:19:55.635 "abort": false, 00:19:55.635 "seek_hole": true, 00:19:55.635 "seek_data": true, 00:19:55.635 "copy": false, 00:19:55.635 "nvme_iov_md": false 00:19:55.635 }, 00:19:55.635 "driver_specific": { 00:19:55.635 "lvol": { 00:19:55.635 "lvol_store_uuid": "1e71162a-d357-4fa6-b2ac-761cda7bdbf6", 00:19:55.635 "base_bdev": "aio_bdev", 00:19:55.635 "thin_provision": false, 00:19:55.635 "num_allocated_clusters": 38, 00:19:55.635 "snapshot": false, 00:19:55.635 "clone": false, 00:19:55.635 "esnap_clone": false 00:19:55.635 } 00:19:55.635 } 00:19:55.635 } 00:19:55.635 ] 00:19:55.635 10:36:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:19:55.635 10:36:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e71162a-d357-4fa6-b2ac-761cda7bdbf6 00:19:55.635 10:36:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:19:55.895 10:36:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:19:55.895 10:36:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e71162a-d357-4fa6-b2ac-761cda7bdbf6 00:19:55.895 10:36:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:19:56.155 10:36:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:19:56.155 10:36:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 67c9b70c-df70-4cce-8713-6b6da2dfd83c 00:19:56.155 10:36:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1e71162a-d357-4fa6-b2ac-761cda7bdbf6 00:19:56.414 10:36:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:19:56.414 10:36:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:19:56.675 00:19:56.675 real 0m15.443s 00:19:56.675 user 0m15.091s 00:19:56.675 sys 0m1.281s 00:19:56.675 10:36:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:56.675 10:36:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:19:56.675 ************************************ 00:19:56.675 END TEST lvs_grow_clean 00:19:56.675 ************************************ 00:19:56.675 10:36:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:19:56.675 10:36:02 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:19:56.675 10:36:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:56.675 10:36:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:19:56.675 10:36:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:19:56.675 ************************************ 00:19:56.675 START TEST lvs_grow_dirty 00:19:56.675 ************************************ 00:19:56.675 10:36:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:19:56.675 10:36:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:19:56.675 10:36:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:19:56.675 10:36:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:19:56.675 10:36:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:19:56.675 10:36:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:19:56.675 10:36:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:19:56.675 10:36:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:19:56.675 10:36:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:19:56.675 10:36:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:19:56.936 10:36:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:19:56.936 10:36:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:19:56.936 10:36:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=a79814b1-32db-4938-b9f0-32dbc84f263f 00:19:56.936 10:36:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a79814b1-32db-4938-b9f0-32dbc84f263f 00:19:56.936 10:36:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:19:57.196 10:36:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:19:57.196 10:36:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:19:57.196 10:36:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a79814b1-32db-4938-b9f0-32dbc84f263f lvol 150 00:19:57.458 10:36:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=dd2b82af-9ab0-4d2e-a04e-db161334d142 00:19:57.458 10:36:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:19:57.458 10:36:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:19:57.458 
[2024-07-22 10:36:03.062510] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:19:57.458 [2024-07-22 10:36:03.062561] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:19:57.458 true 00:19:57.458 10:36:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a79814b1-32db-4938-b9f0-32dbc84f263f 00:19:57.458 10:36:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:19:57.723 10:36:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:19:57.723 10:36:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:19:57.723 10:36:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 dd2b82af-9ab0-4d2e-a04e-db161334d142 00:19:57.983 10:36:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:58.243 [2024-07-22 10:36:03.708514] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:58.243 10:36:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:58.243 10:36:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1943554 00:19:58.243 10:36:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:58.243 10:36:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:19:58.243 10:36:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1943554 /var/tmp/bdevperf.sock 00:19:58.243 10:36:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1943554 ']' 00:19:58.243 10:36:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:58.243 10:36:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:58.243 10:36:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:58.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
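At this point the lvol is exported over NVMe-oF/TCP and a separate bdevperf application is started in wait mode (-z) on its own RPC socket; the bdev_nvme_attach_controller call that follows in the trace connects it to the target as Nvme0n1. In outline, reusing the values from this run ($lvol is the UUID captured in the sketch above):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode0
  $RPC nvmf_create_subsystem $NQN -a -s SPDK0
  $RPC nvmf_subsystem_add_ns $NQN "$lvol"                        # lvol UUID from the step above
  $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  # once bdevperf is listening on its socket, attach the exported namespace
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN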
00:19:58.243 10:36:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:58.243 10:36:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:19:58.243 [2024-07-22 10:36:03.922863] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:19:58.243 [2024-07-22 10:36:03.922913] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1943554 ] 00:19:58.503 EAL: No free 2048 kB hugepages reported on node 1 00:19:58.503 [2024-07-22 10:36:04.003169] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.503 [2024-07-22 10:36:04.032035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:59.073 10:36:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:59.073 10:36:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:19:59.073 10:36:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:19:59.333 Nvme0n1 00:19:59.593 10:36:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:19:59.593 [ 00:19:59.593 { 00:19:59.593 "name": "Nvme0n1", 00:19:59.593 "aliases": [ 00:19:59.593 "dd2b82af-9ab0-4d2e-a04e-db161334d142" 00:19:59.593 ], 00:19:59.593 "product_name": "NVMe disk", 00:19:59.593 "block_size": 4096, 00:19:59.593 "num_blocks": 38912, 00:19:59.593 "uuid": "dd2b82af-9ab0-4d2e-a04e-db161334d142", 00:19:59.593 "assigned_rate_limits": { 00:19:59.593 "rw_ios_per_sec": 0, 00:19:59.593 "rw_mbytes_per_sec": 0, 00:19:59.593 "r_mbytes_per_sec": 0, 00:19:59.593 "w_mbytes_per_sec": 0 00:19:59.593 }, 00:19:59.593 "claimed": false, 00:19:59.593 "zoned": false, 00:19:59.593 "supported_io_types": { 00:19:59.593 "read": true, 00:19:59.593 "write": true, 00:19:59.593 "unmap": true, 00:19:59.593 "flush": true, 00:19:59.593 "reset": true, 00:19:59.593 "nvme_admin": true, 00:19:59.593 "nvme_io": true, 00:19:59.593 "nvme_io_md": false, 00:19:59.593 "write_zeroes": true, 00:19:59.593 "zcopy": false, 00:19:59.593 "get_zone_info": false, 00:19:59.593 "zone_management": false, 00:19:59.593 "zone_append": false, 00:19:59.593 "compare": true, 00:19:59.593 "compare_and_write": true, 00:19:59.593 "abort": true, 00:19:59.593 "seek_hole": false, 00:19:59.593 "seek_data": false, 00:19:59.593 "copy": true, 00:19:59.593 "nvme_iov_md": false 00:19:59.593 }, 00:19:59.593 "memory_domains": [ 00:19:59.593 { 00:19:59.593 "dma_device_id": "system", 00:19:59.593 "dma_device_type": 1 00:19:59.593 } 00:19:59.593 ], 00:19:59.593 "driver_specific": { 00:19:59.593 "nvme": [ 00:19:59.593 { 00:19:59.593 "trid": { 00:19:59.593 "trtype": "TCP", 00:19:59.593 "adrfam": "IPv4", 00:19:59.593 "traddr": "10.0.0.2", 00:19:59.593 "trsvcid": "4420", 00:19:59.593 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:19:59.593 }, 00:19:59.593 "ctrlr_data": { 00:19:59.593 "cntlid": 1, 00:19:59.593 "vendor_id": "0x8086", 00:19:59.593 "model_number": "SPDK bdev Controller", 00:19:59.593 "serial_number": "SPDK0", 
00:19:59.593 "firmware_revision": "24.09", 00:19:59.593 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:59.593 "oacs": { 00:19:59.593 "security": 0, 00:19:59.593 "format": 0, 00:19:59.593 "firmware": 0, 00:19:59.593 "ns_manage": 0 00:19:59.593 }, 00:19:59.593 "multi_ctrlr": true, 00:19:59.593 "ana_reporting": false 00:19:59.593 }, 00:19:59.593 "vs": { 00:19:59.593 "nvme_version": "1.3" 00:19:59.593 }, 00:19:59.593 "ns_data": { 00:19:59.593 "id": 1, 00:19:59.593 "can_share": true 00:19:59.593 } 00:19:59.593 } 00:19:59.593 ], 00:19:59.593 "mp_policy": "active_passive" 00:19:59.593 } 00:19:59.593 } 00:19:59.593 ] 00:19:59.593 10:36:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1943799 00:19:59.593 10:36:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:19:59.593 10:36:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:59.853 Running I/O for 10 seconds... 00:20:00.896 Latency(us) 00:20:00.896 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.896 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:00.896 Nvme0n1 : 1.00 17710.00 69.18 0.00 0.00 0.00 0.00 0.00 00:20:00.896 =================================================================================================================== 00:20:00.896 Total : 17710.00 69.18 0.00 0.00 0.00 0.00 0.00 00:20:00.896 00:20:01.832 10:36:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a79814b1-32db-4938-b9f0-32dbc84f263f 00:20:01.832 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:01.832 Nvme0n1 : 2.00 17808.00 69.56 0.00 0.00 0.00 0.00 0.00 00:20:01.832 =================================================================================================================== 00:20:01.832 Total : 17808.00 69.56 0.00 0.00 0.00 0.00 0.00 00:20:01.832 00:20:01.832 true 00:20:01.832 10:36:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a79814b1-32db-4938-b9f0-32dbc84f263f 00:20:01.832 10:36:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:20:02.092 10:36:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:20:02.092 10:36:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:20:02.092 10:36:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1943799 00:20:02.663 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:02.663 Nvme0n1 : 3.00 17883.33 69.86 0.00 0.00 0.00 0.00 0.00 00:20:02.663 =================================================================================================================== 00:20:02.663 Total : 17883.33 69.86 0.00 0.00 0.00 0.00 0.00 00:20:02.663 00:20:04.042 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:04.042 Nvme0n1 : 4.00 17917.25 69.99 0.00 0.00 0.00 0.00 0.00 00:20:04.042 =================================================================================================================== 00:20:04.042 Total : 17917.25 69.99 0.00 
0.00 0.00 0.00 0.00 00:20:04.042 00:20:04.612 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:04.612 Nvme0n1 : 5.00 17939.20 70.08 0.00 0.00 0.00 0.00 0.00 00:20:04.612 =================================================================================================================== 00:20:04.612 Total : 17939.20 70.08 0.00 0.00 0.00 0.00 0.00 00:20:04.612 00:20:05.997 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:05.997 Nvme0n1 : 6.00 17955.50 70.14 0.00 0.00 0.00 0.00 0.00 00:20:05.997 =================================================================================================================== 00:20:05.997 Total : 17955.50 70.14 0.00 0.00 0.00 0.00 0.00 00:20:05.997 00:20:06.941 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:06.941 Nvme0n1 : 7.00 17974.57 70.21 0.00 0.00 0.00 0.00 0.00 00:20:06.941 =================================================================================================================== 00:20:06.941 Total : 17974.57 70.21 0.00 0.00 0.00 0.00 0.00 00:20:06.941 00:20:07.880 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:07.880 Nvme0n1 : 8.00 17989.62 70.27 0.00 0.00 0.00 0.00 0.00 00:20:07.880 =================================================================================================================== 00:20:07.880 Total : 17989.62 70.27 0.00 0.00 0.00 0.00 0.00 00:20:07.880 00:20:08.831 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:08.831 Nvme0n1 : 9.00 18002.67 70.32 0.00 0.00 0.00 0.00 0.00 00:20:08.831 =================================================================================================================== 00:20:08.831 Total : 18002.67 70.32 0.00 0.00 0.00 0.00 0.00 00:20:08.831 00:20:09.769 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:09.769 Nvme0n1 : 10.00 18011.90 70.36 0.00 0.00 0.00 0.00 0.00 00:20:09.769 =================================================================================================================== 00:20:09.769 Total : 18011.90 70.36 0.00 0.00 0.00 0.00 0.00 00:20:09.769 00:20:09.769 00:20:09.769 Latency(us) 00:20:09.769 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.769 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:09.769 Nvme0n1 : 10.01 18012.78 70.36 0.00 0.00 7102.62 4232.53 15182.51 00:20:09.769 =================================================================================================================== 00:20:09.769 Total : 18012.78 70.36 0.00 0.00 7102.62 4232.53 15182.51 00:20:09.769 0 00:20:09.769 10:36:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1943554 00:20:09.769 10:36:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 1943554 ']' 00:20:09.769 10:36:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 1943554 00:20:09.769 10:36:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:20:09.769 10:36:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:09.769 10:36:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1943554 00:20:09.769 10:36:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:09.769 10:36:15 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:09.769 10:36:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1943554' 00:20:09.769 killing process with pid 1943554 00:20:09.769 10:36:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 1943554 00:20:09.769 Received shutdown signal, test time was about 10.000000 seconds 00:20:09.769 00:20:09.769 Latency(us) 00:20:09.769 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.769 =================================================================================================================== 00:20:09.769 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:09.769 10:36:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 1943554 00:20:10.029 10:36:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:10.029 10:36:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:10.289 10:36:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:20:10.289 10:36:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a79814b1-32db-4938-b9f0-32dbc84f263f 00:20:10.549 10:36:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:20:10.549 10:36:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:20:10.549 10:36:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1939893 00:20:10.549 10:36:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1939893 00:20:10.549 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1939893 Killed "${NVMF_APP[@]}" "$@" 00:20:10.549 10:36:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:20:10.549 10:36:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:20:10.549 10:36:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:10.549 10:36:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:10.549 10:36:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:20:10.549 10:36:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1946424 00:20:10.549 10:36:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1946424 00:20:10.549 10:36:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:10.549 10:36:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1946424 ']' 00:20:10.549 10:36:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:10.549 10:36:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:20:10.549 10:36:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:10.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:10.549 10:36:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:10.549 10:36:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:20:10.549 [2024-07-22 10:36:16.094984] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:20:10.549 [2024-07-22 10:36:16.095039] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:10.549 EAL: No free 2048 kB hugepages reported on node 1 00:20:10.549 [2024-07-22 10:36:16.166895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.549 [2024-07-22 10:36:16.198268] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:10.550 [2024-07-22 10:36:16.198305] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:10.550 [2024-07-22 10:36:16.198313] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:10.550 [2024-07-22 10:36:16.198319] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:10.550 [2024-07-22 10:36:16.198325] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
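These app_setup_trace notices appear because the target is launched with -e 0xFFFF (all tracepoint groups) and shm id 0. As the hint itself suggests, the events can be snapshotted live or the raw shared-memory file kept for later; the process_shm step at the end of this test tars the same /dev/shm file into the output directory (destination below is arbitrary):

  spdk_trace -s nvmf -i 0            # snapshot of the nvmf app's tracepoint events at runtime
  cp /dev/shm/nvmf_trace.0 /tmp/     # or keep the raw shm trace file for offline analysis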
00:20:10.550 [2024-07-22 10:36:16.198347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:11.491 10:36:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:11.491 10:36:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:20:11.491 10:36:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:11.491 10:36:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:11.491 10:36:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:20:11.491 10:36:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:11.491 10:36:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:20:11.492 [2024-07-22 10:36:17.033159] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:20:11.492 [2024-07-22 10:36:17.033241] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:20:11.492 [2024-07-22 10:36:17.033270] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:20:11.492 10:36:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:20:11.492 10:36:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev dd2b82af-9ab0-4d2e-a04e-db161334d142 00:20:11.492 10:36:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=dd2b82af-9ab0-4d2e-a04e-db161334d142 00:20:11.492 10:36:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:11.492 10:36:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:20:11.492 10:36:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:11.492 10:36:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:11.492 10:36:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:20:11.752 10:36:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b dd2b82af-9ab0-4d2e-a04e-db161334d142 -t 2000 00:20:11.752 [ 00:20:11.752 { 00:20:11.752 "name": "dd2b82af-9ab0-4d2e-a04e-db161334d142", 00:20:11.752 "aliases": [ 00:20:11.752 "lvs/lvol" 00:20:11.752 ], 00:20:11.752 "product_name": "Logical Volume", 00:20:11.752 "block_size": 4096, 00:20:11.752 "num_blocks": 38912, 00:20:11.752 "uuid": "dd2b82af-9ab0-4d2e-a04e-db161334d142", 00:20:11.752 "assigned_rate_limits": { 00:20:11.752 "rw_ios_per_sec": 0, 00:20:11.752 "rw_mbytes_per_sec": 0, 00:20:11.752 "r_mbytes_per_sec": 0, 00:20:11.752 "w_mbytes_per_sec": 0 00:20:11.752 }, 00:20:11.752 "claimed": false, 00:20:11.752 "zoned": false, 00:20:11.752 "supported_io_types": { 00:20:11.752 "read": true, 00:20:11.752 "write": true, 00:20:11.752 "unmap": true, 00:20:11.752 "flush": false, 00:20:11.752 "reset": true, 00:20:11.752 "nvme_admin": false, 00:20:11.752 "nvme_io": false, 00:20:11.752 "nvme_io_md": 
false, 00:20:11.752 "write_zeroes": true, 00:20:11.752 "zcopy": false, 00:20:11.752 "get_zone_info": false, 00:20:11.752 "zone_management": false, 00:20:11.752 "zone_append": false, 00:20:11.752 "compare": false, 00:20:11.752 "compare_and_write": false, 00:20:11.752 "abort": false, 00:20:11.752 "seek_hole": true, 00:20:11.752 "seek_data": true, 00:20:11.752 "copy": false, 00:20:11.752 "nvme_iov_md": false 00:20:11.752 }, 00:20:11.752 "driver_specific": { 00:20:11.752 "lvol": { 00:20:11.752 "lvol_store_uuid": "a79814b1-32db-4938-b9f0-32dbc84f263f", 00:20:11.752 "base_bdev": "aio_bdev", 00:20:11.752 "thin_provision": false, 00:20:11.752 "num_allocated_clusters": 38, 00:20:11.752 "snapshot": false, 00:20:11.752 "clone": false, 00:20:11.752 "esnap_clone": false 00:20:11.752 } 00:20:11.752 } 00:20:11.752 } 00:20:11.752 ] 00:20:11.752 10:36:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:20:11.752 10:36:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a79814b1-32db-4938-b9f0-32dbc84f263f 00:20:11.752 10:36:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:20:12.013 10:36:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:20:12.013 10:36:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a79814b1-32db-4938-b9f0-32dbc84f263f 00:20:12.013 10:36:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:20:12.013 10:36:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:20:12.013 10:36:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:20:12.273 [2024-07-22 10:36:17.793268] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:20:12.273 10:36:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a79814b1-32db-4938-b9f0-32dbc84f263f 00:20:12.273 10:36:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:20:12.273 10:36:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a79814b1-32db-4938-b9f0-32dbc84f263f 00:20:12.273 10:36:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:12.273 10:36:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:12.273 10:36:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:12.273 10:36:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:12.273 10:36:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:20:12.273 10:36:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:12.273 10:36:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:12.273 10:36:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:20:12.273 10:36:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a79814b1-32db-4938-b9f0-32dbc84f263f 00:20:12.533 request: 00:20:12.533 { 00:20:12.533 "uuid": "a79814b1-32db-4938-b9f0-32dbc84f263f", 00:20:12.533 "method": "bdev_lvol_get_lvstores", 00:20:12.533 "req_id": 1 00:20:12.533 } 00:20:12.533 Got JSON-RPC error response 00:20:12.533 response: 00:20:12.533 { 00:20:12.533 "code": -19, 00:20:12.533 "message": "No such device" 00:20:12.533 } 00:20:12.533 10:36:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:20:12.533 10:36:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:12.533 10:36:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:12.533 10:36:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:12.534 10:36:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:20:12.534 aio_bdev 00:20:12.534 10:36:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev dd2b82af-9ab0-4d2e-a04e-db161334d142 00:20:12.534 10:36:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=dd2b82af-9ab0-4d2e-a04e-db161334d142 00:20:12.534 10:36:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:12.534 10:36:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:20:12.534 10:36:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:12.534 10:36:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:12.534 10:36:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:20:12.809 10:36:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b dd2b82af-9ab0-4d2e-a04e-db161334d142 -t 2000 00:20:12.809 [ 00:20:12.809 { 00:20:12.809 "name": "dd2b82af-9ab0-4d2e-a04e-db161334d142", 00:20:12.809 "aliases": [ 00:20:12.809 "lvs/lvol" 00:20:12.809 ], 00:20:12.809 "product_name": "Logical Volume", 00:20:12.809 "block_size": 4096, 00:20:12.809 "num_blocks": 38912, 00:20:12.809 "uuid": "dd2b82af-9ab0-4d2e-a04e-db161334d142", 00:20:12.809 "assigned_rate_limits": { 00:20:12.809 "rw_ios_per_sec": 0, 00:20:12.809 "rw_mbytes_per_sec": 0, 00:20:12.809 "r_mbytes_per_sec": 0, 00:20:12.809 "w_mbytes_per_sec": 0 00:20:12.809 }, 00:20:12.809 "claimed": false, 00:20:12.809 "zoned": false, 00:20:12.809 "supported_io_types": { 
00:20:12.809 "read": true, 00:20:12.809 "write": true, 00:20:12.809 "unmap": true, 00:20:12.809 "flush": false, 00:20:12.809 "reset": true, 00:20:12.809 "nvme_admin": false, 00:20:12.809 "nvme_io": false, 00:20:12.809 "nvme_io_md": false, 00:20:12.809 "write_zeroes": true, 00:20:12.809 "zcopy": false, 00:20:12.809 "get_zone_info": false, 00:20:12.809 "zone_management": false, 00:20:12.809 "zone_append": false, 00:20:12.809 "compare": false, 00:20:12.809 "compare_and_write": false, 00:20:12.809 "abort": false, 00:20:12.809 "seek_hole": true, 00:20:12.809 "seek_data": true, 00:20:12.809 "copy": false, 00:20:12.809 "nvme_iov_md": false 00:20:12.809 }, 00:20:12.809 "driver_specific": { 00:20:12.809 "lvol": { 00:20:12.810 "lvol_store_uuid": "a79814b1-32db-4938-b9f0-32dbc84f263f", 00:20:12.810 "base_bdev": "aio_bdev", 00:20:12.810 "thin_provision": false, 00:20:12.810 "num_allocated_clusters": 38, 00:20:12.810 "snapshot": false, 00:20:12.810 "clone": false, 00:20:12.810 "esnap_clone": false 00:20:12.810 } 00:20:12.810 } 00:20:12.810 } 00:20:12.810 ] 00:20:12.810 10:36:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:20:12.810 10:36:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a79814b1-32db-4938-b9f0-32dbc84f263f 00:20:12.810 10:36:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:20:13.072 10:36:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:20:13.072 10:36:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a79814b1-32db-4938-b9f0-32dbc84f263f 00:20:13.072 10:36:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:20:13.331 10:36:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:20:13.331 10:36:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete dd2b82af-9ab0-4d2e-a04e-db161334d142 00:20:13.331 10:36:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a79814b1-32db-4938-b9f0-32dbc84f263f 00:20:13.591 10:36:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:20:13.591 10:36:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:20:13.591 00:20:13.591 real 0m17.079s 00:20:13.591 user 0m44.661s 00:20:13.591 sys 0m2.877s 00:20:13.591 10:36:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:13.591 10:36:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:20:13.591 ************************************ 00:20:13.591 END TEST lvs_grow_dirty 00:20:13.591 ************************************ 00:20:13.851 10:36:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:20:13.851 10:36:19 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
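Stripped of the xtrace noise, the dirty half of the test just verified three things: re-creating the AIO bdev after the unclean kill makes the lvstore recover from its on-disk metadata, deleting the AIO bdev hot-removes the lvstore so lookups fail with -19 "No such device", and creating it once more brings the lvol back with its cluster counts intact (99 total, 61 free, 38 allocated). Roughly, with the ! standing in for the test's NOT helper and the UUID taken from this run:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  AIO_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
  lvs=a79814b1-32db-4938-b9f0-32dbc84f263f                # lvstore UUID from this run
  $RPC bdev_aio_create "$AIO_FILE" aio_bdev 4096          # triggers blobstore recovery
  free=$($RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters')
  (( free == 61 ))                                        # grown store minus the 38-cluster lvol
  $RPC bdev_aio_delete aio_bdev                           # hot-removes the lvstore
  ! $RPC bdev_lvol_get_lvstores -u "$lvs"                 # now fails: "No such device"
  $RPC bdev_aio_create "$AIO_FILE" aio_bdev 4096          # recover again; the lvol reappears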
00:20:13.851 10:36:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:20:13.851 10:36:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:20:13.851 10:36:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:20:13.851 10:36:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:13.851 10:36:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:20:13.851 10:36:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:20:13.851 10:36:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:20:13.851 10:36:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:13.851 nvmf_trace.0 00:20:13.851 10:36:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:20:13.851 10:36:19 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:20:13.851 10:36:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:13.851 10:36:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:20:13.851 10:36:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:13.851 10:36:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:20:13.851 10:36:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:13.851 10:36:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:13.851 rmmod nvme_tcp 00:20:13.851 rmmod nvme_fabrics 00:20:13.851 rmmod nvme_keyring 00:20:13.851 10:36:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:13.851 10:36:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:20:13.851 10:36:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:20:13.851 10:36:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1946424 ']' 00:20:13.851 10:36:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1946424 00:20:13.851 10:36:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 1946424 ']' 00:20:13.851 10:36:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 1946424 00:20:13.851 10:36:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:20:13.851 10:36:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:13.851 10:36:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1946424 00:20:13.851 10:36:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:13.851 10:36:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:13.851 10:36:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1946424' 00:20:13.851 killing process with pid 1946424 00:20:13.851 10:36:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 1946424 00:20:13.851 10:36:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 1946424 00:20:14.111 10:36:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:14.111 10:36:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:14.111 10:36:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:14.111 
10:36:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:14.111 10:36:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:14.111 10:36:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:14.111 10:36:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:14.111 10:36:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:16.023 10:36:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:16.023 00:20:16.023 real 0m44.161s 00:20:16.023 user 1m5.922s 00:20:16.023 sys 0m10.432s 00:20:16.023 10:36:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:16.023 10:36:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:20:16.023 ************************************ 00:20:16.023 END TEST nvmf_lvs_grow 00:20:16.023 ************************************ 00:20:16.283 10:36:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:16.283 10:36:21 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:20:16.283 10:36:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:16.283 10:36:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:16.283 10:36:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:16.283 ************************************ 00:20:16.283 START TEST nvmf_bdev_io_wait 00:20:16.283 ************************************ 00:20:16.283 10:36:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:20:16.283 * Looking for test storage... 
00:20:16.283 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:16.283 10:36:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:16.283 10:36:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:20:16.283 10:36:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:16.283 10:36:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:16.283 10:36:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:16.283 10:36:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:16.283 10:36:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:16.283 10:36:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:16.284 10:36:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:16.284 10:36:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:16.284 10:36:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:16.284 10:36:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:16.284 10:36:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:16.284 10:36:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:16.284 10:36:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:16.284 10:36:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:16.284 10:36:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:16.284 10:36:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:16.284 10:36:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:16.284 10:36:21 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:16.284 10:36:21 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:16.284 10:36:21 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:16.284 10:36:21 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.284 10:36:21 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.284 10:36:21 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.284 10:36:21 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:20:16.284 10:36:21 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.284 10:36:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:20:16.284 10:36:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:16.284 10:36:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:16.284 10:36:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:16.284 10:36:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:16.284 10:36:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:16.284 10:36:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:16.284 10:36:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:16.284 10:36:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:16.284 10:36:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:16.284 10:36:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:16.284 10:36:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:20:16.284 10:36:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:16.284 10:36:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:16.284 10:36:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:16.284 10:36:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:16.284 10:36:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:16.284 10:36:21 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:16.284 10:36:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:16.284 10:36:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:16.284 10:36:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:16.284 10:36:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:16.284 10:36:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:20:16.284 10:36:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:24.414 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:24.414 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:20:24.414 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:24.414 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:24.414 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:24.414 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:24.414 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:24.414 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:20:24.414 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:24.414 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:20:24.414 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:20:24.414 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:20:24.414 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:20:24.414 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:20:24.414 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:24.415 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:24.415 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:24.415 Found net devices under 0000:31:00.0: cvl_0_0 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:24.415 Found net devices under 0000:31:00.1: cvl_0_1 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:24.415 10:36:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:24.415 10:36:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:24.415 10:36:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:24.415 10:36:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:24.415 10:36:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:24.677 10:36:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:24.677 10:36:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:24.677 10:36:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:24.677 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:24.677 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.693 ms 00:20:24.677 00:20:24.677 --- 10.0.0.2 ping statistics --- 00:20:24.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.677 rtt min/avg/max/mdev = 0.693/0.693/0.693/0.000 ms 00:20:24.677 10:36:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:24.677 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:24.677 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.349 ms 00:20:24.677 00:20:24.677 --- 10.0.0.1 ping statistics --- 00:20:24.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.677 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:20:24.677 10:36:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:24.677 10:36:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:20:24.677 10:36:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:24.677 10:36:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:24.677 10:36:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:24.677 10:36:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:24.677 10:36:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:24.677 10:36:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:24.677 10:36:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:24.677 10:36:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:24.677 10:36:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:24.677 10:36:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:24.677 10:36:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:24.677 10:36:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1951894 00:20:24.677 10:36:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1951894 00:20:24.677 10:36:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:24.677 10:36:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 1951894 ']' 00:20:24.677 10:36:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.677 10:36:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:24.677 10:36:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:24.677 10:36:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:24.677 10:36:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:24.677 [2024-07-22 10:36:30.311225] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
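The NIC preparation above moves one of the two e810 ports (cvl_0_0) into the cvl_0_0_ns_spdk namespace to act as the target side, keeps cvl_0_1 in the root namespace as the initiator, opens TCP/4420, and proves reachability with a ping in each direction; the nvmf_tgt seen starting here is then run under ip netns exec on that namespace. In outline:

  NS=cvl_0_0_ns_spdk
  ip netns add $NS
  ip link set cvl_0_0 netns $NS                           # target-side port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator IP, root namespace
  ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP inside the namespace
  ip link set cvl_0_1 up
  ip netns exec $NS ip link set cvl_0_0 up
  ip netns exec $NS ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                      # root ns -> target
  ip netns exec $NS ping -c 1 10.0.0.1                    # target ns -> initiator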
00:20:24.677 [2024-07-22 10:36:30.311289] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:24.677 EAL: No free 2048 kB hugepages reported on node 1 00:20:24.937 [2024-07-22 10:36:30.394220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:24.937 [2024-07-22 10:36:30.432735] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:24.937 [2024-07-22 10:36:30.432776] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:24.937 [2024-07-22 10:36:30.432784] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:24.937 [2024-07-22 10:36:30.432790] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:24.937 [2024-07-22 10:36:30.432796] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:24.937 [2024-07-22 10:36:30.432939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:24.937 [2024-07-22 10:36:30.433053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:24.937 [2024-07-22 10:36:30.433214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:24.937 [2024-07-22 10:36:30.433215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:25.505 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:25.505 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:20:25.505 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:25.505 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:25.505 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:25.505 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:25.505 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:20:25.505 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.505 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:25.505 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.505 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:20:25.505 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.505 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:25.505 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.505 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:25.505 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.505 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:25.505 [2024-07-22 10:36:31.193279] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:25.505 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
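Because nvmf_tgt was started with --wait-for-rpc, bdev_io_wait.sh can shrink the bdev_io pool before the framework initializes; that tiny pool is what forces I/O onto the queue-io-wait path this test exercises. Gathered from the trace on either side of this point (rpc_cmd is the harness wrapper around scripts/rpc.py talking to the target's /var/tmp/spdk.sock; the -p/-c comments below are my reading of the bdev_set_options flags), the bring-up is roughly:

  ./scripts/rpc.py bdev_set_options -p 5 -c 1                     # bdev_io pool size 5, per-channel cache 1
  ./scripts/rpc.py framework_start_init                           # finish the startup that --wait-for-rpc deferred
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192        # TCP transport, options as passed by the harness
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0           # 64 MiB malloc bdev, 512-byte blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420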
00:20:25.505 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:25.505 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.505 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:25.766 Malloc0 00:20:25.766 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.766 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:25.766 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.766 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:25.766 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.766 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:25.766 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.766 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:25.766 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.766 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:25.766 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.766 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:25.766 [2024-07-22 10:36:31.260332] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:25.766 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.766 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1952028 00:20:25.766 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1952030 00:20:25.766 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:20:25.766 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:20:25.766 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:20:25.766 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:20:25.766 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:25.766 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:25.766 { 00:20:25.766 "params": { 00:20:25.766 "name": "Nvme$subsystem", 00:20:25.766 "trtype": "$TEST_TRANSPORT", 00:20:25.766 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:25.766 "adrfam": "ipv4", 00:20:25.766 "trsvcid": "$NVMF_PORT", 00:20:25.766 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:25.766 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:25.766 "hdgst": ${hdgst:-false}, 00:20:25.766 "ddgst": ${ddgst:-false} 00:20:25.766 }, 00:20:25.766 "method": "bdev_nvme_attach_controller" 00:20:25.766 } 00:20:25.766 EOF 00:20:25.766 )") 00:20:25.766 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1952032 00:20:25.766 10:36:31 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:20:25.766 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:20:25.766 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:20:25.766 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:20:25.766 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:25.766 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1952035 00:20:25.766 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:25.766 { 00:20:25.766 "params": { 00:20:25.766 "name": "Nvme$subsystem", 00:20:25.766 "trtype": "$TEST_TRANSPORT", 00:20:25.766 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:25.766 "adrfam": "ipv4", 00:20:25.766 "trsvcid": "$NVMF_PORT", 00:20:25.766 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:25.766 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:25.766 "hdgst": ${hdgst:-false}, 00:20:25.766 "ddgst": ${ddgst:-false} 00:20:25.766 }, 00:20:25.766 "method": "bdev_nvme_attach_controller" 00:20:25.766 } 00:20:25.766 EOF 00:20:25.766 )") 00:20:25.766 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:20:25.766 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:20:25.766 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:20:25.766 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:20:25.766 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:20:25.766 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:20:25.766 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:25.766 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:20:25.766 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:25.766 { 00:20:25.766 "params": { 00:20:25.766 "name": "Nvme$subsystem", 00:20:25.766 "trtype": "$TEST_TRANSPORT", 00:20:25.766 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:25.766 "adrfam": "ipv4", 00:20:25.766 "trsvcid": "$NVMF_PORT", 00:20:25.766 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:25.766 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:25.766 "hdgst": ${hdgst:-false}, 00:20:25.766 "ddgst": ${ddgst:-false} 00:20:25.766 }, 00:20:25.766 "method": "bdev_nvme_attach_controller" 00:20:25.766 } 00:20:25.766 EOF 00:20:25.766 )") 00:20:25.766 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:20:25.766 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:20:25.767 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:20:25.767 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:20:25.767 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:25.767 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:25.767 { 00:20:25.767 "params": { 00:20:25.767 "name": "Nvme$subsystem", 00:20:25.767 "trtype": "$TEST_TRANSPORT", 00:20:25.767 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:25.767 "adrfam": "ipv4", 00:20:25.767 "trsvcid": "$NVMF_PORT", 00:20:25.767 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:25.767 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:25.767 "hdgst": ${hdgst:-false}, 00:20:25.767 "ddgst": ${ddgst:-false} 00:20:25.767 }, 00:20:25.767 "method": "bdev_nvme_attach_controller" 00:20:25.767 } 00:20:25.767 EOF 00:20:25.767 )") 00:20:25.767 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:20:25.767 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1952028 00:20:25.767 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:20:25.767 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:20:25.767 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:20:25.767 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:20:25.767 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:20:25.767 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:25.767 "params": { 00:20:25.767 "name": "Nvme1", 00:20:25.767 "trtype": "tcp", 00:20:25.767 "traddr": "10.0.0.2", 00:20:25.767 "adrfam": "ipv4", 00:20:25.767 "trsvcid": "4420", 00:20:25.767 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:25.767 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:25.767 "hdgst": false, 00:20:25.767 "ddgst": false 00:20:25.767 }, 00:20:25.767 "method": "bdev_nvme_attach_controller" 00:20:25.767 }' 00:20:25.767 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:20:25.767 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:20:25.767 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:25.767 "params": { 00:20:25.767 "name": "Nvme1", 00:20:25.767 "trtype": "tcp", 00:20:25.767 "traddr": "10.0.0.2", 00:20:25.767 "adrfam": "ipv4", 00:20:25.767 "trsvcid": "4420", 00:20:25.767 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:25.767 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:25.767 "hdgst": false, 00:20:25.767 "ddgst": false 00:20:25.767 }, 00:20:25.767 "method": "bdev_nvme_attach_controller" 00:20:25.767 }' 00:20:25.767 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:20:25.767 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:25.767 "params": { 00:20:25.767 "name": "Nvme1", 00:20:25.767 "trtype": "tcp", 00:20:25.767 "traddr": "10.0.0.2", 00:20:25.767 "adrfam": "ipv4", 00:20:25.767 "trsvcid": "4420", 00:20:25.767 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:25.767 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:25.767 "hdgst": false, 00:20:25.767 "ddgst": false 00:20:25.767 }, 00:20:25.767 "method": "bdev_nvme_attach_controller" 00:20:25.767 }' 00:20:25.767 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:20:25.767 10:36:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:25.767 "params": { 00:20:25.767 "name": "Nvme1", 00:20:25.767 "trtype": "tcp", 00:20:25.767 "traddr": "10.0.0.2", 00:20:25.767 "adrfam": "ipv4", 00:20:25.767 "trsvcid": "4420", 00:20:25.767 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:25.767 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:25.767 "hdgst": false, 00:20:25.767 "ddgst": false 00:20:25.767 }, 00:20:25.767 "method": "bdev_nvme_attach_controller" 00:20:25.767 }' 00:20:25.767 [2024-07-22 10:36:31.312473] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:20:25.767 [2024-07-22 10:36:31.312529] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:25.767 [2024-07-22 10:36:31.314346] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:20:25.767 [2024-07-22 10:36:31.314393] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:20:25.767 [2024-07-22 10:36:31.315945] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:20:25.767 [2024-07-22 10:36:31.315992] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:20:25.767 [2024-07-22 10:36:31.316344] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
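Four bdevperf instances are launched in parallel against that one subsystem, each on its own core (masks 0x10/0x20/0x40/0x80, shm ids 1-4) and each running a different 1-second workload (write, read, flush, unmap) at queue depth 128 with 4 KiB I/Os. Their configuration arrives on file descriptor 63 (--json /dev/fd/63, i.e. a process substitution of gen_nvmf_target_json); judging by the printf output above, the bdev config it feeds them contains essentially one entry per instance like this (the outer wrapper with the subsystems/bdev config list is not echoed in the trace):

  {
    "params": {
      "name": "Nvme1",
      "trtype": "tcp",
      "traddr": "10.0.0.2",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode1",
      "hostnqn": "nqn.2016-06.io.spdk:host1",
      "hdgst": false,
      "ddgst": false
    },
    "method": "bdev_nvme_attach_controller"
  }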
00:20:25.767 [2024-07-22 10:36:31.316387] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:20:25.767 EAL: No free 2048 kB hugepages reported on node 1 00:20:25.767 EAL: No free 2048 kB hugepages reported on node 1 00:20:26.029 [2024-07-22 10:36:31.466803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.029 EAL: No free 2048 kB hugepages reported on node 1 00:20:26.029 [2024-07-22 10:36:31.484668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:26.029 EAL: No free 2048 kB hugepages reported on node 1 00:20:26.029 [2024-07-22 10:36:31.525864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.029 [2024-07-22 10:36:31.545303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:20:26.029 [2024-07-22 10:36:31.571075] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.029 [2024-07-22 10:36:31.589235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:20:26.029 [2024-07-22 10:36:31.620151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.029 [2024-07-22 10:36:31.638410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:20:26.029 Running I/O for 1 seconds... 00:20:26.288 Running I/O for 1 seconds... 00:20:26.288 Running I/O for 1 seconds... 00:20:26.288 Running I/O for 1 seconds... 00:20:27.227 00:20:27.227 Latency(us) 00:20:27.227 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:27.227 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:20:27.227 Nvme1n1 : 1.01 17729.91 69.26 0.00 0.00 7197.36 4778.67 14964.05 00:20:27.227 =================================================================================================================== 00:20:27.227 Total : 17729.91 69.26 0.00 0.00 7197.36 4778.67 14964.05 00:20:27.227 00:20:27.227 Latency(us) 00:20:27.227 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:27.227 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:20:27.227 Nvme1n1 : 1.00 187510.00 732.46 0.00 0.00 680.15 273.07 798.72 00:20:27.227 =================================================================================================================== 00:20:27.227 Total : 187510.00 732.46 0.00 0.00 680.15 273.07 798.72 00:20:27.227 00:20:27.227 Latency(us) 00:20:27.227 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:27.227 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:20:27.227 Nvme1n1 : 1.01 11868.86 46.36 0.00 0.00 10749.26 5597.87 21408.43 00:20:27.227 =================================================================================================================== 00:20:27.227 Total : 11868.86 46.36 0.00 0.00 10749.26 5597.87 21408.43 00:20:27.227 00:20:27.227 Latency(us) 00:20:27.227 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:27.227 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:20:27.227 Nvme1n1 : 1.00 13046.79 50.96 0.00 0.00 9783.49 4560.21 20206.93 00:20:27.227 =================================================================================================================== 00:20:27.227 Total : 13046.79 50.96 0.00 0.00 9783.49 4560.21 20206.93 00:20:27.487 10:36:33 nvmf_tcp.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@38 -- # wait 1952030 00:20:27.487 10:36:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1952032 00:20:27.487 10:36:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1952035 00:20:27.487 10:36:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:27.487 10:36:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.487 10:36:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:27.487 10:36:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.487 10:36:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:20:27.487 10:36:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:20:27.487 10:36:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:27.487 10:36:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:20:27.487 10:36:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:27.487 10:36:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:20:27.487 10:36:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:27.487 10:36:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:27.487 rmmod nvme_tcp 00:20:27.487 rmmod nvme_fabrics 00:20:27.487 rmmod nvme_keyring 00:20:27.487 10:36:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:27.487 10:36:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:20:27.487 10:36:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:20:27.487 10:36:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1951894 ']' 00:20:27.487 10:36:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1951894 00:20:27.487 10:36:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 1951894 ']' 00:20:27.487 10:36:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 1951894 00:20:27.487 10:36:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:20:27.487 10:36:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:27.487 10:36:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1951894 00:20:27.746 10:36:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:27.746 10:36:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:27.746 10:36:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1951894' 00:20:27.746 killing process with pid 1951894 00:20:27.746 10:36:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 1951894 00:20:27.746 10:36:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 1951894 00:20:27.746 10:36:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:27.746 10:36:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:27.746 10:36:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:27.746 10:36:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:27.746 10:36:33 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:20:27.746 10:36:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:27.746 10:36:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:27.746 10:36:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:30.289 10:36:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:30.289 00:20:30.289 real 0m13.608s 00:20:30.289 user 0m19.093s 00:20:30.289 sys 0m7.665s 00:20:30.289 10:36:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:30.289 10:36:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:30.289 ************************************ 00:20:30.289 END TEST nvmf_bdev_io_wait 00:20:30.289 ************************************ 00:20:30.289 10:36:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:30.289 10:36:35 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:20:30.289 10:36:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:30.289 10:36:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:30.289 10:36:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:30.289 ************************************ 00:20:30.289 START TEST nvmf_queue_depth 00:20:30.289 ************************************ 00:20:30.289 10:36:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:20:30.289 * Looking for test storage... 
00:20:30.289 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:30.289 10:36:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:30.289 10:36:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:20:30.289 10:36:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:30.289 10:36:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:30.289 10:36:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:30.289 10:36:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:30.289 10:36:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:30.290 10:36:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:30.290 10:36:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:30.290 10:36:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:30.290 10:36:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:30.290 10:36:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:30.290 10:36:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:30.290 10:36:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:30.290 10:36:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:30.290 10:36:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:30.290 10:36:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:30.290 10:36:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:30.290 10:36:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:30.290 10:36:35 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:30.290 10:36:35 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:30.290 10:36:35 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:30.290 10:36:35 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.290 10:36:35 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.290 10:36:35 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.290 10:36:35 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:20:30.290 10:36:35 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.290 10:36:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:20:30.290 10:36:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:30.290 10:36:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:30.290 10:36:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:30.290 10:36:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:30.290 10:36:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:30.290 10:36:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:30.290 10:36:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:30.290 10:36:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:30.290 10:36:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:20:30.290 10:36:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:20:30.290 10:36:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:30.290 10:36:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:20:30.290 10:36:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:30.290 10:36:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:30.290 10:36:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:30.290 10:36:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:30.290 10:36:35 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:20:30.290 10:36:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:30.290 10:36:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:30.290 10:36:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:30.290 10:36:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:30.290 10:36:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:30.290 10:36:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:20:30.290 10:36:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:38.428 
10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:38.428 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:38.428 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:38.428 Found net devices under 0000:31:00.0: cvl_0_0 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:38.428 Found net devices under 0000:31:00.1: cvl_0_1 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:38.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:38.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.549 ms 00:20:38.428 00:20:38.428 --- 10.0.0.2 ping statistics --- 00:20:38.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.428 rtt min/avg/max/mdev = 0.549/0.549/0.549/0.000 ms 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:38.428 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:38.428 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.345 ms 00:20:38.428 00:20:38.428 --- 10.0.0.1 ping statistics --- 00:20:38.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.428 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1957117 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1957117 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1957117 ']' 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:38.428 10:36:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:38.428 [2024-07-22 10:36:43.973228] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
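Setup for the queue-depth test mirrors the previous one, but the target is now pinned to a single core and started without --wait-for-rpc. The nvmfappstart records above amount to roughly the following (waitforlisten is the harness helper that, per the echo above, blocks until the app answers on /var/tmp/spdk.sock):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &    # -m 0x2: one reactor; -e 0xFFFF: tracepoint group mask
  nvmfpid=$!
  waitforlisten "$nvmfpid"        # returns once /var/tmp/spdk.sock is accepting RPCs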
00:20:38.428 [2024-07-22 10:36:43.973293] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:38.428 EAL: No free 2048 kB hugepages reported on node 1 00:20:38.428 [2024-07-22 10:36:44.067989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.428 [2024-07-22 10:36:44.114182] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:38.428 [2024-07-22 10:36:44.114238] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:38.428 [2024-07-22 10:36:44.114246] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:38.428 [2024-07-22 10:36:44.114253] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:38.428 [2024-07-22 10:36:44.114259] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:38.428 [2024-07-22 10:36:44.114295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:39.369 10:36:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:39.369 10:36:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:20:39.369 10:36:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:39.369 10:36:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:39.369 10:36:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:39.369 10:36:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:39.369 10:36:44 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:39.369 10:36:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.369 10:36:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:39.369 [2024-07-22 10:36:44.805086] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:39.369 10:36:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.369 10:36:44 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:39.369 10:36:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.369 10:36:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:39.369 Malloc0 00:20:39.369 10:36:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.369 10:36:44 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:39.369 10:36:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.369 10:36:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:39.369 10:36:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.369 10:36:44 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:39.369 10:36:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.369 
10:36:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:39.369 10:36:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.369 10:36:44 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:39.369 10:36:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.369 10:36:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:39.369 [2024-07-22 10:36:44.884693] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:39.369 10:36:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.369 10:36:44 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1957415 00:20:39.369 10:36:44 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:39.369 10:36:44 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:20:39.369 10:36:44 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1957415 /var/tmp/bdevperf.sock 00:20:39.369 10:36:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1957415 ']' 00:20:39.369 10:36:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:39.369 10:36:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:39.369 10:36:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:39.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:39.369 10:36:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:39.369 10:36:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:39.369 [2024-07-22 10:36:44.940385] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
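The queue-depth exercise itself: bdevperf is started idle (-z) on its own RPC socket, the records just below attach the target's namespace through that socket as controller NVMe0 (the bdev shows up as NVMe0n1), and perform_tests then drives a 10-second verify workload at queue depth 1024 with 4 KiB I/Os. A standalone sketch of the same flow:

  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  bdevperf_pid=$!
  # attach the NVMe/TCP controller exported by the target; bdevperf will see bdev NVMe0n1
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # -z above told bdevperf to wait for this RPC rather than start the run on its own
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests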
00:20:39.369 [2024-07-22 10:36:44.940456] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1957415 ] 00:20:39.369 EAL: No free 2048 kB hugepages reported on node 1 00:20:39.369 [2024-07-22 10:36:45.010749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.369 [2024-07-22 10:36:45.049700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.310 10:36:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:40.310 10:36:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:20:40.310 10:36:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:40.310 10:36:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.310 10:36:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:40.310 NVMe0n1 00:20:40.310 10:36:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.310 10:36:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:40.310 Running I/O for 10 seconds... 00:20:50.442 00:20:50.442 Latency(us) 00:20:50.442 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.442 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:20:50.442 Verification LBA range: start 0x0 length 0x4000 00:20:50.442 NVMe0n1 : 10.05 11546.73 45.10 0.00 0.00 88324.44 10813.44 63351.47 00:20:50.442 =================================================================================================================== 00:20:50.442 Total : 11546.73 45.10 0.00 0.00 88324.44 10813.44 63351.47 00:20:50.442 0 00:20:50.442 10:36:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1957415 00:20:50.442 10:36:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1957415 ']' 00:20:50.442 10:36:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1957415 00:20:50.442 10:36:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:20:50.442 10:36:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:50.442 10:36:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1957415 00:20:50.442 10:36:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:50.442 10:36:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:50.442 10:36:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1957415' 00:20:50.442 killing process with pid 1957415 00:20:50.442 10:36:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1957415 00:20:50.442 Received shutdown signal, test time was about 10.000000 seconds 00:20:50.442 00:20:50.442 Latency(us) 00:20:50.442 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.442 
=================================================================================================================== 00:20:50.442 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:50.442 10:36:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1957415 00:20:50.703 10:36:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:20:50.703 10:36:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:20:50.703 10:36:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:50.703 10:36:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:20:50.703 10:36:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:50.703 10:36:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:20:50.703 10:36:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:50.703 10:36:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:50.703 rmmod nvme_tcp 00:20:50.703 rmmod nvme_fabrics 00:20:50.703 rmmod nvme_keyring 00:20:50.703 10:36:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:50.703 10:36:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:20:50.703 10:36:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:20:50.703 10:36:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1957117 ']' 00:20:50.703 10:36:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1957117 00:20:50.703 10:36:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1957117 ']' 00:20:50.703 10:36:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1957117 00:20:50.703 10:36:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:20:50.703 10:36:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:50.703 10:36:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1957117 00:20:50.703 10:36:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:50.703 10:36:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:50.703 10:36:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1957117' 00:20:50.703 killing process with pid 1957117 00:20:50.703 10:36:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1957117 00:20:50.703 10:36:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1957117 00:20:50.964 10:36:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:50.964 10:36:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:50.964 10:36:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:50.964 10:36:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:50.964 10:36:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:50.964 10:36:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:50.964 10:36:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:50.964 10:36:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:52.876 10:36:58 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:53.137 00:20:53.137 real 0m23.095s 00:20:53.137 user 0m25.912s 00:20:53.137 sys 0m7.285s 00:20:53.137 10:36:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:53.137 10:36:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:53.137 ************************************ 00:20:53.137 END TEST nvmf_queue_depth 00:20:53.137 ************************************ 00:20:53.137 10:36:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:53.137 10:36:58 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:20:53.137 10:36:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:53.137 10:36:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:53.137 10:36:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:53.137 ************************************ 00:20:53.137 START TEST nvmf_target_multipath 00:20:53.137 ************************************ 00:20:53.137 10:36:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:20:53.137 * Looking for test storage... 00:20:53.137 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:53.137 10:36:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:53.137 10:36:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:20:53.137 10:36:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:53.137 10:36:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:53.137 10:36:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:53.137 10:36:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:53.137 10:36:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:53.137 10:36:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:53.137 10:36:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:53.137 10:36:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:53.137 10:36:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:53.137 10:36:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:53.137 10:36:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:53.137 10:36:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:53.137 10:36:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:53.137 10:36:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:53.137 10:36:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:53.137 10:36:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:53.137 10:36:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:53.137 10:36:58 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:53.137 10:36:58 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:53.137 10:36:58 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:53.137 10:36:58 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.137 10:36:58 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.137 10:36:58 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.137 10:36:58 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:20:53.138 10:36:58 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.138 10:36:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:20:53.138 10:36:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:53.138 10:36:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:53.138 10:36:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:53.138 10:36:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:53.138 10:36:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:20:53.138 10:36:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:53.138 10:36:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:53.138 10:36:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:53.138 10:36:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:53.138 10:36:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:53.138 10:36:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:20:53.138 10:36:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:53.138 10:36:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:20:53.138 10:36:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:53.138 10:36:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:53.138 10:36:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:53.138 10:36:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:53.138 10:36:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:53.138 10:36:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:53.138 10:36:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:53.138 10:36:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:53.138 10:36:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:53.138 10:36:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:53.138 10:36:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:20:53.138 10:36:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@298 -- # local -ga mlx 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:01.281 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:01.281 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:01.281 Found net devices under 0000:31:00.0: cvl_0_0 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:01.281 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:01.282 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:01.282 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:01.282 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:01.282 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:01.282 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:01.282 Found net devices under 0000:31:00.1: cvl_0_1 00:21:01.282 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:01.282 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:01.282 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:21:01.282 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:01.282 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:01.282 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:01.282 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:01.282 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:01.282 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:01.282 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:01.282 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:01.282 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:01.282 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:01.282 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:01.282 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:01.282 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:01.282 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:01.282 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:01.282 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:01.282 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:01.282 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:01.282 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:01.282 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:01.282 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:01.282 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:01.282 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:01.282 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:01.282 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.637 ms 00:21:01.282 00:21:01.282 --- 10.0.0.2 ping statistics --- 00:21:01.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:01.282 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms 00:21:01.282 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:01.282 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:01.282 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:21:01.282 00:21:01.282 --- 10.0.0.1 ping statistics --- 00:21:01.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:01.282 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:21:01.282 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:01.282 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:21:01.282 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:01.282 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:01.282 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:01.282 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:01.282 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:01.282 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:01.282 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:01.282 10:37:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:21:01.282 10:37:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:21:01.282 only one NIC for nvmf test 00:21:01.282 10:37:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:21:01.282 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:01.282 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:21:01.282 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:01.282 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:21:01.282 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:01.282 10:37:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:01.543 rmmod nvme_tcp 00:21:01.543 rmmod nvme_fabrics 00:21:01.543 rmmod nvme_keyring 00:21:01.543 10:37:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:01.543 10:37:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:21:01.543 10:37:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:21:01.543 10:37:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:21:01.543 10:37:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:01.543 10:37:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:01.543 10:37:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:01.543 10:37:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:01.543 10:37:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:01.543 10:37:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:01.543 10:37:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:01.543 10:37:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:03.452 10:37:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:21:03.452 10:37:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:21:03.452 10:37:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:21:03.452 10:37:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:03.452 10:37:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:21:03.452 10:37:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:03.452 10:37:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:21:03.452 10:37:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:03.452 10:37:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:03.452 10:37:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:03.452 10:37:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:21:03.452 10:37:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:21:03.452 10:37:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:21:03.452 10:37:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:03.452 10:37:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:03.712 10:37:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:03.712 10:37:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:03.712 10:37:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:03.712 10:37:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:03.712 10:37:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:03.712 10:37:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:03.712 10:37:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:03.712 00:21:03.712 real 0m10.509s 00:21:03.712 user 0m2.311s 00:21:03.712 sys 0m6.077s 00:21:03.712 10:37:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:03.712 10:37:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:03.712 ************************************ 00:21:03.712 END TEST nvmf_target_multipath 00:21:03.712 ************************************ 00:21:03.712 10:37:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:03.712 10:37:09 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:21:03.712 10:37:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:03.712 10:37:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:03.712 10:37:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:03.712 ************************************ 00:21:03.712 START TEST nvmf_zcopy 00:21:03.712 ************************************ 00:21:03.712 10:37:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:21:03.712 * Looking for test storage... 
00:21:03.712 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:03.712 10:37:09 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:03.712 10:37:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:21:03.712 10:37:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:03.712 10:37:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:03.712 10:37:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:03.712 10:37:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:03.712 10:37:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:03.712 10:37:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:03.712 10:37:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:03.712 10:37:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:03.712 10:37:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:03.712 10:37:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:03.712 10:37:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:03.712 10:37:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:03.712 10:37:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:03.712 10:37:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:03.712 10:37:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:03.712 10:37:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:03.712 10:37:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:03.712 10:37:09 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:03.712 10:37:09 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:03.712 10:37:09 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:03.712 10:37:09 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.712 10:37:09 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:21:03.712 10:37:09 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.712 10:37:09 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:21:03.712 10:37:09 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.712 10:37:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:21:03.712 10:37:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:03.712 10:37:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:03.712 10:37:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:03.712 10:37:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:03.712 10:37:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:03.712 10:37:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:03.712 10:37:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:03.712 10:37:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:03.712 10:37:09 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:21:03.712 10:37:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:03.712 10:37:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:03.712 10:37:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:03.712 10:37:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:03.712 10:37:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:03.712 10:37:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:03.712 10:37:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:03.712 10:37:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:03.712 10:37:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:03.712 10:37:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:03.712 10:37:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:21:03.712 10:37:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:11.853 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:11.853 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:21:11.853 10:37:17 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:21:11.853 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:11.853 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:11.853 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:11.853 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:11.853 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:21:11.853 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:11.853 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:21:11.853 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:21:11.853 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:21:11.853 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:21:11.853 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:21:11.853 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:21:11.853 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:11.853 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:11.853 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:11.853 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:11.853 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:11.853 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:11.853 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:11.853 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:11.853 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:11.853 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:11.853 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:11.853 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:11.853 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:11.853 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:11.853 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:11.853 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:11.853 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:11.853 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:11.853 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:11.853 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:11.853 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:11.853 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:11.853 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.853 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.853 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:11.854 
10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:11.854 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:11.854 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:11.854 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:11.854 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:11.854 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.854 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.854 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:11.854 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:11.854 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:11.854 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:11.854 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:11.854 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.854 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:11.854 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:11.854 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:11.854 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:11.854 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.854 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:11.854 Found net devices under 0000:31:00.0: cvl_0_0 00:21:11.854 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.854 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:11.854 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.854 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:11.854 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:11.854 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:11.854 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:11.854 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.854 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:11.854 Found net devices under 0000:31:00.1: cvl_0_1 00:21:11.854 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.854 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:11.854 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:21:11.854 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:11.854 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:11.854 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:11.854 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:11.854 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:11.854 10:37:17 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:11.854 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:11.854 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:11.854 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:11.854 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:11.854 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:11.854 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:11.854 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:11.854 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:11.854 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:11.854 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:11.854 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:11.854 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:11.854 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:11.854 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:12.114 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:12.114 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:12.114 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:12.114 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:12.114 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.688 ms 00:21:12.114 00:21:12.114 --- 10.0.0.2 ping statistics --- 00:21:12.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.114 rtt min/avg/max/mdev = 0.688/0.688/0.688/0.000 ms 00:21:12.114 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:12.114 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:12.114 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.251 ms 00:21:12.114 00:21:12.114 --- 10.0.0.1 ping statistics --- 00:21:12.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.114 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:21:12.114 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:12.114 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:21:12.114 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:12.114 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:12.114 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:12.114 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:12.114 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:12.114 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:12.114 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:12.114 10:37:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:21:12.114 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:12.114 10:37:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:12.114 10:37:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:12.114 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1969091 00:21:12.115 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1969091 00:21:12.115 10:37:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:12.115 10:37:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 1969091 ']' 00:21:12.115 10:37:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:12.115 10:37:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:12.115 10:37:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:12.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:12.115 10:37:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:12.115 10:37:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:12.115 [2024-07-22 10:37:17.704072] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:21:12.115 [2024-07-22 10:37:17.704133] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:12.115 EAL: No free 2048 kB hugepages reported on node 1 00:21:12.115 [2024-07-22 10:37:17.798363] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.376 [2024-07-22 10:37:17.844585] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:12.376 [2024-07-22 10:37:17.844633] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:12.376 [2024-07-22 10:37:17.844641] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:12.376 [2024-07-22 10:37:17.844648] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:12.376 [2024-07-22 10:37:17.844653] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:12.376 [2024-07-22 10:37:17.844675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:12.947 10:37:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:12.947 10:37:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:21:12.947 10:37:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:12.947 10:37:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:12.947 10:37:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:12.947 10:37:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:12.947 10:37:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:21:12.947 10:37:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:21:12.947 10:37:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.947 10:37:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:12.947 [2024-07-22 10:37:18.557670] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:12.947 10:37:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.947 10:37:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:21:12.947 10:37:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.947 10:37:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:12.947 10:37:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.947 10:37:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:12.947 10:37:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.947 10:37:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:12.947 [2024-07-22 10:37:18.573881] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:12.947 10:37:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.947 10:37:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:12.947 10:37:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.947 10:37:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:12.947 10:37:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.947 10:37:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:21:12.947 10:37:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.947 10:37:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:12.947 malloc0 00:21:12.947 10:37:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.947 
10:37:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:12.947 10:37:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.947 10:37:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:12.947 10:37:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.947 10:37:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:21:12.947 10:37:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:21:12.947 10:37:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:21:12.947 10:37:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:21:12.947 10:37:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:12.947 10:37:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:12.947 { 00:21:12.947 "params": { 00:21:12.947 "name": "Nvme$subsystem", 00:21:12.947 "trtype": "$TEST_TRANSPORT", 00:21:12.947 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:12.947 "adrfam": "ipv4", 00:21:12.947 "trsvcid": "$NVMF_PORT", 00:21:12.947 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:12.947 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:12.947 "hdgst": ${hdgst:-false}, 00:21:12.947 "ddgst": ${ddgst:-false} 00:21:12.947 }, 00:21:12.947 "method": "bdev_nvme_attach_controller" 00:21:12.947 } 00:21:12.947 EOF 00:21:12.947 )") 00:21:12.947 10:37:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:21:12.947 10:37:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:21:12.947 10:37:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:21:12.947 10:37:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:12.947 "params": { 00:21:12.947 "name": "Nvme1", 00:21:12.947 "trtype": "tcp", 00:21:12.947 "traddr": "10.0.0.2", 00:21:12.947 "adrfam": "ipv4", 00:21:12.947 "trsvcid": "4420", 00:21:12.947 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:12.947 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:12.947 "hdgst": false, 00:21:12.947 "ddgst": false 00:21:12.947 }, 00:21:12.947 "method": "bdev_nvme_attach_controller" 00:21:12.947 }' 00:21:13.207 [2024-07-22 10:37:18.660081] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:21:13.207 [2024-07-22 10:37:18.660146] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1969124 ] 00:21:13.207 EAL: No free 2048 kB hugepages reported on node 1 00:21:13.207 [2024-07-22 10:37:18.733992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.207 [2024-07-22 10:37:18.772907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:13.467 Running I/O for 10 seconds... 
00:21:23.456 00:21:23.456 Latency(us) 00:21:23.456 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:23.456 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:21:23.456 Verification LBA range: start 0x0 length 0x1000 00:21:23.456 Nvme1n1 : 10.05 9257.68 72.33 0.00 0.00 13723.46 2129.92 43035.31 00:21:23.456 =================================================================================================================== 00:21:23.456 Total : 9257.68 72.33 0.00 0.00 13723.46 2129.92 43035.31 00:21:23.718 10:37:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1971152 00:21:23.718 10:37:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:21:23.718 10:37:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:23.718 10:37:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:21:23.718 10:37:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:21:23.718 10:37:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:21:23.718 10:37:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:21:23.718 [2024-07-22 10:37:29.162698] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.718 [2024-07-22 10:37:29.162726] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.718 10:37:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:23.718 10:37:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:23.718 { 00:21:23.718 "params": { 00:21:23.718 "name": "Nvme$subsystem", 00:21:23.718 "trtype": "$TEST_TRANSPORT", 00:21:23.718 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:23.718 "adrfam": "ipv4", 00:21:23.718 "trsvcid": "$NVMF_PORT", 00:21:23.718 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:23.718 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:23.718 "hdgst": ${hdgst:-false}, 00:21:23.718 "ddgst": ${ddgst:-false} 00:21:23.718 }, 00:21:23.718 "method": "bdev_nvme_attach_controller" 00:21:23.718 } 00:21:23.718 EOF 00:21:23.718 )") 00:21:23.718 10:37:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:21:23.718 [2024-07-22 10:37:29.170686] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.718 [2024-07-22 10:37:29.170697] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.718 10:37:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:21:23.718 10:37:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:21:23.718 10:37:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:23.718 "params": { 00:21:23.718 "name": "Nvme1", 00:21:23.718 "trtype": "tcp", 00:21:23.718 "traddr": "10.0.0.2", 00:21:23.718 "adrfam": "ipv4", 00:21:23.718 "trsvcid": "4420", 00:21:23.718 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:23.718 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:23.718 "hdgst": false, 00:21:23.718 "ddgst": false 00:21:23.718 }, 00:21:23.718 "method": "bdev_nvme_attach_controller" 00:21:23.718 }' 00:21:23.718 [2024-07-22 10:37:29.178750] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.718 [2024-07-22 10:37:29.178758] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.718 [2024-07-22 10:37:29.186724] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.718 [2024-07-22 10:37:29.186732] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.718 [2024-07-22 10:37:29.194744] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.718 [2024-07-22 10:37:29.194752] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.718 [2024-07-22 10:37:29.202764] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.718 [2024-07-22 10:37:29.202772] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.718 [2024-07-22 10:37:29.206873] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:21:23.718 [2024-07-22 10:37:29.206921] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1971152 ] 00:21:23.718 [2024-07-22 10:37:29.210785] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.718 [2024-07-22 10:37:29.210793] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.718 [2024-07-22 10:37:29.218807] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.718 [2024-07-22 10:37:29.218816] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.718 [2024-07-22 10:37:29.226828] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.718 [2024-07-22 10:37:29.226836] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.718 [2024-07-22 10:37:29.234849] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.718 [2024-07-22 10:37:29.234857] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.718 EAL: No free 2048 kB hugepages reported on node 1 00:21:23.718 [2024-07-22 10:37:29.242871] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.718 [2024-07-22 10:37:29.242879] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.718 [2024-07-22 10:37:29.250891] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.718 [2024-07-22 10:37:29.250899] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.718 [2024-07-22 10:37:29.258910] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.718 [2024-07-22 10:37:29.258917] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.718 [2024-07-22 10:37:29.266929] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.718 [2024-07-22 10:37:29.266937] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.718 [2024-07-22 10:37:29.270275] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.718 [2024-07-22 10:37:29.274950] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.718 [2024-07-22 10:37:29.274958] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.718 [2024-07-22 10:37:29.282970] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.718 [2024-07-22 10:37:29.282979] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.718 [2024-07-22 10:37:29.290991] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.718 [2024-07-22 10:37:29.291004] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.718 [2024-07-22 10:37:29.299009] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.718 [2024-07-22 10:37:29.299018] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.718 [2024-07-22 10:37:29.300879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.718 [2024-07-22 10:37:29.307030] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.718 [2024-07-22 10:37:29.307038] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.718 [2024-07-22 10:37:29.315058] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.718 [2024-07-22 10:37:29.315070] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.718 [2024-07-22 10:37:29.323078] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.718 [2024-07-22 10:37:29.323089] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.718 [2024-07-22 10:37:29.331096] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.718 [2024-07-22 10:37:29.331105] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.718 [2024-07-22 10:37:29.339115] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.718 [2024-07-22 10:37:29.339123] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.718 [2024-07-22 10:37:29.347136] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.718 [2024-07-22 10:37:29.347145] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.718 [2024-07-22 10:37:29.355156] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.718 [2024-07-22 10:37:29.355164] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.718 [2024-07-22 10:37:29.363177] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.718 [2024-07-22 10:37:29.363185] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:21:23.718 [2024-07-22 10:37:29.371207] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.718 [2024-07-22 10:37:29.371223] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.718 [2024-07-22 10:37:29.379224] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.718 [2024-07-22 10:37:29.379234] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.718 [2024-07-22 10:37:29.387243] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.718 [2024-07-22 10:37:29.387253] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.718 [2024-07-22 10:37:29.395266] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.718 [2024-07-22 10:37:29.395276] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.718 [2024-07-22 10:37:29.403284] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.718 [2024-07-22 10:37:29.403292] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.718 [2024-07-22 10:37:29.411305] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.718 [2024-07-22 10:37:29.411314] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.979 [2024-07-22 10:37:29.419325] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.979 [2024-07-22 10:37:29.419333] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.979 [2024-07-22 10:37:29.427346] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.979 [2024-07-22 10:37:29.427354] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.979 [2024-07-22 10:37:29.435368] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.979 [2024-07-22 10:37:29.435376] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.979 [2024-07-22 10:37:29.443389] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.979 [2024-07-22 10:37:29.443403] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.979 [2024-07-22 10:37:29.451414] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.979 [2024-07-22 10:37:29.451424] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.979 [2024-07-22 10:37:29.459435] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.979 [2024-07-22 10:37:29.459445] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.979 [2024-07-22 10:37:29.467452] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.979 [2024-07-22 10:37:29.467462] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.979 [2024-07-22 10:37:29.475474] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.979 [2024-07-22 10:37:29.475487] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.979 Running I/O for 5 seconds... 
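The xtrace above is zcopy.sh generating the bdevperf JSON config on the fly: gen_nvmf_target_json expands the heredoc template once per subsystem, joins the entries with IFS=',' and printf, and hands the result to bdevperf through process substitution, which is why the command line shows --json /dev/fd/63 rather than a file on disk. A minimal standalone sketch of the same invocation follows; the params block and the workload flags (-t 5 -q 128 -w randrw -M 50 -o 8192) are taken from the log, while the outer "subsystems"/"bdev" wrapper and the helper name gen_config are illustrative assumptions, not the exact output of gen_nvmf_target_json.

#!/usr/bin/env bash
# Sketch only (not the test's own helper): rebuild the config that the trace
# above prints for Nvme1 and pass it to bdevperf via process substitution,
# which appears on the command line as --json /dev/fd/63.
# Assumption: the standard SPDK "subsystems"/"bdev" JSON layout around the entry.
BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf

gen_config() {
  cat <<EOF
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
}

# Same workload parameters as the run started above: 5 seconds, queue depth 128,
# 50/50 random read/write, 8192-byte I/Os.
"$BDEVPERF" --json <(gen_config) -t 5 -q 128 -w randrw -M 50 -o 8192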
00:21:23.979 [2024-07-22 10:37:29.483491] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.979 [2024-07-22 10:37:29.483500] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.979 [2024-07-22 10:37:29.495657] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.979 [2024-07-22 10:37:29.495674] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.979 [2024-07-22 10:37:29.502466] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.979 [2024-07-22 10:37:29.502482] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.979 [2024-07-22 10:37:29.511795] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.979 [2024-07-22 10:37:29.511812] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.979 [2024-07-22 10:37:29.520931] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.979 [2024-07-22 10:37:29.520947] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.979 [2024-07-22 10:37:29.529538] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.979 [2024-07-22 10:37:29.529555] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.979 [2024-07-22 10:37:29.537968] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.979 [2024-07-22 10:37:29.537988] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.979 [2024-07-22 10:37:29.546857] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.979 [2024-07-22 10:37:29.546873] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.979 [2024-07-22 10:37:29.555387] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.979 [2024-07-22 10:37:29.555407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.979 [2024-07-22 10:37:29.563934] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.979 [2024-07-22 10:37:29.563950] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.979 [2024-07-22 10:37:29.572776] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.979 [2024-07-22 10:37:29.572791] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.979 [2024-07-22 10:37:29.581467] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.979 [2024-07-22 10:37:29.581483] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.979 [2024-07-22 10:37:29.590496] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.979 [2024-07-22 10:37:29.590511] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.979 [2024-07-22 10:37:29.598983] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.979 [2024-07-22 10:37:29.598998] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.979 [2024-07-22 10:37:29.607529] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.979 
[2024-07-22 10:37:29.607544] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.979 [2024-07-22 10:37:29.616281] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.979 [2024-07-22 10:37:29.616297] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.979 [2024-07-22 10:37:29.624606] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.979 [2024-07-22 10:37:29.624622] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.979 [2024-07-22 10:37:29.633439] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.979 [2024-07-22 10:37:29.633454] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.979 [2024-07-22 10:37:29.642327] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.979 [2024-07-22 10:37:29.642342] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.979 [2024-07-22 10:37:29.651088] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.979 [2024-07-22 10:37:29.651103] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.979 [2024-07-22 10:37:29.659794] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.979 [2024-07-22 10:37:29.659809] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:23.979 [2024-07-22 10:37:29.668634] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:23.979 [2024-07-22 10:37:29.668649] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.239 [2024-07-22 10:37:29.677501] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.239 [2024-07-22 10:37:29.677517] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.239 [2024-07-22 10:37:29.686473] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.239 [2024-07-22 10:37:29.686487] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.239 [2024-07-22 10:37:29.694964] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.239 [2024-07-22 10:37:29.694980] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.239 [2024-07-22 10:37:29.703804] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.239 [2024-07-22 10:37:29.703823] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.239 [2024-07-22 10:37:29.712836] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.239 [2024-07-22 10:37:29.712851] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.239 [2024-07-22 10:37:29.721829] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.239 [2024-07-22 10:37:29.721845] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.239 [2024-07-22 10:37:29.730647] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.239 [2024-07-22 10:37:29.730662] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.239 [2024-07-22 10:37:29.739217] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.239 [2024-07-22 10:37:29.739233] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.239 [2024-07-22 10:37:29.747989] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.239 [2024-07-22 10:37:29.748004] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.239 [2024-07-22 10:37:29.756834] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.239 [2024-07-22 10:37:29.756850] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.239 [2024-07-22 10:37:29.765676] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.239 [2024-07-22 10:37:29.765692] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.239 [2024-07-22 10:37:29.774368] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.239 [2024-07-22 10:37:29.774384] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.239 [2024-07-22 10:37:29.783362] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.239 [2024-07-22 10:37:29.783378] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.239 [2024-07-22 10:37:29.791876] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.239 [2024-07-22 10:37:29.791891] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.239 [2024-07-22 10:37:29.800425] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.239 [2024-07-22 10:37:29.800441] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.239 [2024-07-22 10:37:29.809166] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.239 [2024-07-22 10:37:29.809181] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.239 [2024-07-22 10:37:29.817514] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.239 [2024-07-22 10:37:29.817529] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.239 [2024-07-22 10:37:29.826277] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.239 [2024-07-22 10:37:29.826293] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.239 [2024-07-22 10:37:29.835407] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.239 [2024-07-22 10:37:29.835423] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.239 [2024-07-22 10:37:29.843927] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.239 [2024-07-22 10:37:29.843942] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.239 [2024-07-22 10:37:29.852252] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.239 [2024-07-22 10:37:29.852267] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.239 [2024-07-22 10:37:29.861119] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.239 [2024-07-22 10:37:29.861134] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.239 [2024-07-22 10:37:29.869427] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.239 [2024-07-22 10:37:29.869445] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.239 [2024-07-22 10:37:29.878392] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.239 [2024-07-22 10:37:29.878411] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.239 [2024-07-22 10:37:29.887291] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.239 [2024-07-22 10:37:29.887306] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.239 [2024-07-22 10:37:29.896189] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.239 [2024-07-22 10:37:29.896204] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.239 [2024-07-22 10:37:29.905071] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.239 [2024-07-22 10:37:29.905085] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.239 [2024-07-22 10:37:29.914401] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.239 [2024-07-22 10:37:29.914417] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.239 [2024-07-22 10:37:29.923465] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.239 [2024-07-22 10:37:29.923481] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.239 [2024-07-22 10:37:29.932111] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.239 [2024-07-22 10:37:29.932126] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.499 [2024-07-22 10:37:29.940714] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.499 [2024-07-22 10:37:29.940729] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.499 [2024-07-22 10:37:29.949434] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.499 [2024-07-22 10:37:29.949449] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.499 [2024-07-22 10:37:29.958082] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.499 [2024-07-22 10:37:29.958097] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.499 [2024-07-22 10:37:29.966654] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.499 [2024-07-22 10:37:29.966669] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.499 [2024-07-22 10:37:29.975755] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.499 [2024-07-22 10:37:29.975770] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.499 [2024-07-22 10:37:29.984370] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.499 [2024-07-22 10:37:29.984384] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.499 [2024-07-22 10:37:29.993053] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.499 [2024-07-22 10:37:29.993067] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.499 [2024-07-22 10:37:30.002673] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.499 [2024-07-22 10:37:30.002689] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.499 [2024-07-22 10:37:30.011794] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.499 [2024-07-22 10:37:30.011808] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.499 [2024-07-22 10:37:30.020805] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.499 [2024-07-22 10:37:30.020820] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.499 [2024-07-22 10:37:30.029893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.499 [2024-07-22 10:37:30.029909] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.499 [2024-07-22 10:37:30.038846] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.499 [2024-07-22 10:37:30.038865] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.499 [2024-07-22 10:37:30.047568] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.499 [2024-07-22 10:37:30.047584] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.499 [2024-07-22 10:37:30.056586] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.499 [2024-07-22 10:37:30.056602] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.499 [2024-07-22 10:37:30.065363] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.499 [2024-07-22 10:37:30.065377] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.499 [2024-07-22 10:37:30.074492] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.499 [2024-07-22 10:37:30.074507] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.499 [2024-07-22 10:37:30.083529] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.499 [2024-07-22 10:37:30.083545] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.499 [2024-07-22 10:37:30.091838] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.499 [2024-07-22 10:37:30.091853] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.499 [2024-07-22 10:37:30.100496] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.499 [2024-07-22 10:37:30.100511] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.499 [2024-07-22 10:37:30.109521] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.499 [2024-07-22 10:37:30.109536] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.499 [2024-07-22 10:37:30.118634] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.499 [2024-07-22 10:37:30.118649] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.499 [2024-07-22 10:37:30.127663] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.499 [2024-07-22 10:37:30.127678] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.499 [2024-07-22 10:37:30.136787] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.499 [2024-07-22 10:37:30.136802] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.499 [2024-07-22 10:37:30.145818] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.499 [2024-07-22 10:37:30.145833] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.499 [2024-07-22 10:37:30.154043] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.499 [2024-07-22 10:37:30.154058] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.499 [2024-07-22 10:37:30.162337] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.499 [2024-07-22 10:37:30.162352] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.499 [2024-07-22 10:37:30.170645] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.499 [2024-07-22 10:37:30.170660] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.499 [2024-07-22 10:37:30.179077] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.499 [2024-07-22 10:37:30.179092] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.499 [2024-07-22 10:37:30.187962] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.499 [2024-07-22 10:37:30.187977] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.499 [2024-07-22 10:37:30.196536] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.499 [2024-07-22 10:37:30.196551] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.759 [2024-07-22 10:37:30.205495] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.759 [2024-07-22 10:37:30.205510] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.759 [2024-07-22 10:37:30.214456] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.759 [2024-07-22 10:37:30.214471] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.759 [2024-07-22 10:37:30.223157] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.759 [2024-07-22 10:37:30.223172] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.759 [2024-07-22 10:37:30.232160] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.759 [2024-07-22 10:37:30.232174] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.759 [2024-07-22 10:37:30.241205] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.759 [2024-07-22 10:37:30.241220] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.759 [2024-07-22 10:37:30.250079] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.759 [2024-07-22 10:37:30.250093] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.759 [2024-07-22 10:37:30.259088] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.759 [2024-07-22 10:37:30.259103] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.759 [2024-07-22 10:37:30.268263] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.759 [2024-07-22 10:37:30.268278] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.759 [2024-07-22 10:37:30.276511] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.759 [2024-07-22 10:37:30.276525] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.759 [2024-07-22 10:37:30.285219] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.759 [2024-07-22 10:37:30.285234] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.759 [2024-07-22 10:37:30.294232] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.759 [2024-07-22 10:37:30.294247] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.759 [2024-07-22 10:37:30.302593] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.759 [2024-07-22 10:37:30.302608] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.759 [2024-07-22 10:37:30.310829] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.759 [2024-07-22 10:37:30.310844] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.759 [2024-07-22 10:37:30.319893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.759 [2024-07-22 10:37:30.319908] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.759 [2024-07-22 10:37:30.328481] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.759 [2024-07-22 10:37:30.328496] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.759 [2024-07-22 10:37:30.337025] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.759 [2024-07-22 10:37:30.337040] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.759 [2024-07-22 10:37:30.345829] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.759 [2024-07-22 10:37:30.345844] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.759 [2024-07-22 10:37:30.354965] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.759 [2024-07-22 10:37:30.354979] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.759 [2024-07-22 10:37:30.363903] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.759 [2024-07-22 10:37:30.363918] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.759 [2024-07-22 10:37:30.373045] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.759 [2024-07-22 10:37:30.373060] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.759 [2024-07-22 10:37:30.381675] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.759 [2024-07-22 10:37:30.381690] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.759 [2024-07-22 10:37:30.390850] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.759 [2024-07-22 10:37:30.390864] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.759 [2024-07-22 10:37:30.399110] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.759 [2024-07-22 10:37:30.399125] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.759 [2024-07-22 10:37:30.408132] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.759 [2024-07-22 10:37:30.408146] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.759 [2024-07-22 10:37:30.417156] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.759 [2024-07-22 10:37:30.417170] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.759 [2024-07-22 10:37:30.425231] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.759 [2024-07-22 10:37:30.425245] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.759 [2024-07-22 10:37:30.434102] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.759 [2024-07-22 10:37:30.434117] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.759 [2024-07-22 10:37:30.442793] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.759 [2024-07-22 10:37:30.442809] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:24.759 [2024-07-22 10:37:30.451780] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:24.759 [2024-07-22 10:37:30.451794] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.020 [2024-07-22 10:37:30.460262] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.020 [2024-07-22 10:37:30.460277] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.020 [2024-07-22 10:37:30.469332] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.020 [2024-07-22 10:37:30.469347] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.020 [2024-07-22 10:37:30.477929] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.020 [2024-07-22 10:37:30.477944] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.020 [2024-07-22 10:37:30.486676] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.020 [2024-07-22 10:37:30.486692] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.020 [2024-07-22 10:37:30.495848] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.020 [2024-07-22 10:37:30.495863] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.020 [2024-07-22 10:37:30.504736] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.020 [2024-07-22 10:37:30.504751] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.020 [2024-07-22 10:37:30.513137] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.020 [2024-07-22 10:37:30.513152] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.020 [2024-07-22 10:37:30.521815] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.020 [2024-07-22 10:37:30.521830] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.020 [2024-07-22 10:37:30.530606] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.020 [2024-07-22 10:37:30.530621] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.020 [2024-07-22 10:37:30.539015] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.020 [2024-07-22 10:37:30.539029] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.020 [2024-07-22 10:37:30.548222] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.020 [2024-07-22 10:37:30.548238] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.020 [2024-07-22 10:37:30.557388] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.020 [2024-07-22 10:37:30.557409] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.020 [2024-07-22 10:37:30.565941] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.020 [2024-07-22 10:37:30.565956] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.020 [2024-07-22 10:37:30.574322] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.020 [2024-07-22 10:37:30.574337] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.020 [2024-07-22 10:37:30.582907] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.020 [2024-07-22 10:37:30.582923] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.020 [2024-07-22 10:37:30.591967] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.020 [2024-07-22 10:37:30.591982] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.020 [2024-07-22 10:37:30.600662] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.020 [2024-07-22 10:37:30.600676] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.020 [2024-07-22 10:37:30.609006] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.020 [2024-07-22 10:37:30.609021] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.020 [2024-07-22 10:37:30.617811] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.020 [2024-07-22 10:37:30.617826] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.020 [2024-07-22 10:37:30.626193] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.020 [2024-07-22 10:37:30.626208] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.020 [2024-07-22 10:37:30.635529] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.020 [2024-07-22 10:37:30.635544] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.020 [2024-07-22 10:37:30.644346] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.020 [2024-07-22 10:37:30.644360] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.020 [2024-07-22 10:37:30.653486] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.020 [2024-07-22 10:37:30.653501] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.020 [2024-07-22 10:37:30.662014] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.020 [2024-07-22 10:37:30.662028] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.020 [2024-07-22 10:37:30.671102] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.020 [2024-07-22 10:37:30.671117] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.020 [2024-07-22 10:37:30.679522] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.020 [2024-07-22 10:37:30.679537] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.020 [2024-07-22 10:37:30.688512] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.020 [2024-07-22 10:37:30.688527] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.020 [2024-07-22 10:37:30.696978] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.020 [2024-07-22 10:37:30.696996] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.020 [2024-07-22 10:37:30.705841] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.020 [2024-07-22 10:37:30.705856] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.020 [2024-07-22 10:37:30.714608] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.020 [2024-07-22 10:37:30.714623] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.280 [2024-07-22 10:37:30.723458] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.280 [2024-07-22 10:37:30.723473] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.280 [2024-07-22 10:37:30.732237] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.280 [2024-07-22 10:37:30.732251] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.280 [2024-07-22 10:37:30.741494] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.280 [2024-07-22 10:37:30.741509] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.280 [2024-07-22 10:37:30.749951] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.280 [2024-07-22 10:37:30.749966] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.280 [2024-07-22 10:37:30.758870] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.280 [2024-07-22 10:37:30.758885] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.280 [2024-07-22 10:37:30.767513] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.280 [2024-07-22 10:37:30.767528] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.280 [2024-07-22 10:37:30.776738] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.280 [2024-07-22 10:37:30.776753] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.280 [2024-07-22 10:37:30.785213] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.280 [2024-07-22 10:37:30.785228] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.280 [2024-07-22 10:37:30.793641] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.280 [2024-07-22 10:37:30.793657] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.280 [2024-07-22 10:37:30.802709] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.280 [2024-07-22 10:37:30.802724] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.280 [2024-07-22 10:37:30.811206] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.280 [2024-07-22 10:37:30.811221] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.280 [2024-07-22 10:37:30.820246] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.280 [2024-07-22 10:37:30.820261] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.280 [2024-07-22 10:37:30.828854] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.280 [2024-07-22 10:37:30.828869] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.280 [2024-07-22 10:37:30.837112] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.280 [2024-07-22 10:37:30.837127] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.280 [2024-07-22 10:37:30.845079] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.280 [2024-07-22 10:37:30.845094] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.280 [2024-07-22 10:37:30.854066] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.280 [2024-07-22 10:37:30.854081] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.280 [2024-07-22 10:37:30.862446] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.280 [2024-07-22 10:37:30.862465] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.280 [2024-07-22 10:37:30.871205] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.280 [2024-07-22 10:37:30.871219] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.280 [2024-07-22 10:37:30.880235] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.280 [2024-07-22 10:37:30.880250] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.280 [2024-07-22 10:37:30.889256] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.280 [2024-07-22 10:37:30.889271] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.280 [2024-07-22 10:37:30.897801] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.280 [2024-07-22 10:37:30.897816] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.280 [2024-07-22 10:37:30.906253] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.280 [2024-07-22 10:37:30.906268] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.280 [2024-07-22 10:37:30.915365] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.280 [2024-07-22 10:37:30.915380] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.280 [2024-07-22 10:37:30.924427] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.280 [2024-07-22 10:37:30.924442] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.281 [2024-07-22 10:37:30.933298] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.281 [2024-07-22 10:37:30.933314] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.281 [2024-07-22 10:37:30.942239] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.281 [2024-07-22 10:37:30.942255] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.281 [2024-07-22 10:37:30.950783] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.281 [2024-07-22 10:37:30.950798] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.281 [2024-07-22 10:37:30.959269] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.281 [2024-07-22 10:37:30.959284] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.281 [2024-07-22 10:37:30.968278] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.281 [2024-07-22 10:37:30.968293] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.281 [2024-07-22 10:37:30.976470] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.281 [2024-07-22 10:37:30.976484] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.541 [2024-07-22 10:37:30.985174] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.541 [2024-07-22 10:37:30.985189] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.541 [2024-07-22 10:37:30.993686] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.541 [2024-07-22 10:37:30.993701] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.541 [2024-07-22 10:37:31.002477] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.541 [2024-07-22 10:37:31.002493] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.541 [2024-07-22 10:37:31.010947] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.541 [2024-07-22 10:37:31.010962] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.541 [2024-07-22 10:37:31.019510] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.541 [2024-07-22 10:37:31.019525] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.541 [2024-07-22 10:37:31.028630] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.541 [2024-07-22 10:37:31.028649] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.541 [2024-07-22 10:37:31.036492] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.541 [2024-07-22 10:37:31.036507] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.541 [2024-07-22 10:37:31.045490] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.541 [2024-07-22 10:37:31.045505] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.541 [2024-07-22 10:37:31.054551] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.541 [2024-07-22 10:37:31.054567] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.541 [2024-07-22 10:37:31.063347] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.541 [2024-07-22 10:37:31.063362] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.541 [2024-07-22 10:37:31.072012] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.541 [2024-07-22 10:37:31.072027] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.541 [2024-07-22 10:37:31.080638] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.541 [2024-07-22 10:37:31.080653] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.541 [2024-07-22 10:37:31.089553] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.541 [2024-07-22 10:37:31.089569] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.541 [2024-07-22 10:37:31.098226] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.541 [2024-07-22 10:37:31.098241] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.541 [2024-07-22 10:37:31.107192] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.541 [2024-07-22 10:37:31.107207] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.541 [2024-07-22 10:37:31.116054] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.541 [2024-07-22 10:37:31.116069] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.541 [2024-07-22 10:37:31.124807] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.541 [2024-07-22 10:37:31.124822] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:25.541 [2024-07-22 10:37:31.133779] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:25.541 [2024-07-22 10:37:31.133793] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:21:25.541 [2024-07-22 10:37:31.142651] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:21:25.541 [2024-07-22 10:37:31.142666] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[the same two-line error pair repeats for every subsequent add-namespace attempt, roughly every 9 ms, through 10:37:33.79]
00:21:28.144 [2024-07-22 10:37:33.796487] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:21:28.144 [2024-07-22 10:37:33.796502] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:21:28.144 [2024-07-22 10:37:33.805225]
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.144 [2024-07-22 10:37:33.805239] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.144 [2024-07-22 10:37:33.814461] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.144 [2024-07-22 10:37:33.814475] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.144 [2024-07-22 10:37:33.822988] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.144 [2024-07-22 10:37:33.823002] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.144 [2024-07-22 10:37:33.831705] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.144 [2024-07-22 10:37:33.831719] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.144 [2024-07-22 10:37:33.840670] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.144 [2024-07-22 10:37:33.840688] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.404 [2024-07-22 10:37:33.849056] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.404 [2024-07-22 10:37:33.849071] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.404 [2024-07-22 10:37:33.857848] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.404 [2024-07-22 10:37:33.857862] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.404 [2024-07-22 10:37:33.866868] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.404 [2024-07-22 10:37:33.866883] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.404 [2024-07-22 10:37:33.875141] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.404 [2024-07-22 10:37:33.875155] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.404 [2024-07-22 10:37:33.883826] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.404 [2024-07-22 10:37:33.883841] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.404 [2024-07-22 10:37:33.892253] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.404 [2024-07-22 10:37:33.892267] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.404 [2024-07-22 10:37:33.900968] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.404 [2024-07-22 10:37:33.900983] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.404 [2024-07-22 10:37:33.909707] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.404 [2024-07-22 10:37:33.909721] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.404 [2024-07-22 10:37:33.918870] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.404 [2024-07-22 10:37:33.918884] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.404 [2024-07-22 10:37:33.927473] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.404 [2024-07-22 10:37:33.927487] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.404 [2024-07-22 10:37:33.936284] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.404 [2024-07-22 10:37:33.936298] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.404 [2024-07-22 10:37:33.945242] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.404 [2024-07-22 10:37:33.945256] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.404 [2024-07-22 10:37:33.954307] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.404 [2024-07-22 10:37:33.954321] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.404 [2024-07-22 10:37:33.963167] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.404 [2024-07-22 10:37:33.963181] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.404 [2024-07-22 10:37:33.972197] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.404 [2024-07-22 10:37:33.972212] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.404 [2024-07-22 10:37:33.980622] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.404 [2024-07-22 10:37:33.980636] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.404 [2024-07-22 10:37:33.989377] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.404 [2024-07-22 10:37:33.989391] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.404 [2024-07-22 10:37:33.997799] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.404 [2024-07-22 10:37:33.997813] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.404 [2024-07-22 10:37:34.006718] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.404 [2024-07-22 10:37:34.006733] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.404 [2024-07-22 10:37:34.015512] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.404 [2024-07-22 10:37:34.015527] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.404 [2024-07-22 10:37:34.023926] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.404 [2024-07-22 10:37:34.023940] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.404 [2024-07-22 10:37:34.032470] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.404 [2024-07-22 10:37:34.032485] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.404 [2024-07-22 10:37:34.040313] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.405 [2024-07-22 10:37:34.040328] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.405 [2024-07-22 10:37:34.049863] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.405 [2024-07-22 10:37:34.049878] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.405 [2024-07-22 10:37:34.058784] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.405 [2024-07-22 10:37:34.058799] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.405 [2024-07-22 10:37:34.067908] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.405 [2024-07-22 10:37:34.067923] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.405 [2024-07-22 10:37:34.077011] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.405 [2024-07-22 10:37:34.077026] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.405 [2024-07-22 10:37:34.085571] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.405 [2024-07-22 10:37:34.085586] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.405 [2024-07-22 10:37:34.094399] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.405 [2024-07-22 10:37:34.094414] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.665 [2024-07-22 10:37:34.102734] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.665 [2024-07-22 10:37:34.102749] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.665 [2024-07-22 10:37:34.111663] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.665 [2024-07-22 10:37:34.111678] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.665 [2024-07-22 10:37:34.120196] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.665 [2024-07-22 10:37:34.120211] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.665 [2024-07-22 10:37:34.128858] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.665 [2024-07-22 10:37:34.128872] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.665 [2024-07-22 10:37:34.137698] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.665 [2024-07-22 10:37:34.137712] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.665 [2024-07-22 10:37:34.146813] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.665 [2024-07-22 10:37:34.146828] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.665 [2024-07-22 10:37:34.155712] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.665 [2024-07-22 10:37:34.155727] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.665 [2024-07-22 10:37:34.164461] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.665 [2024-07-22 10:37:34.164476] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.665 [2024-07-22 10:37:34.173356] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.665 [2024-07-22 10:37:34.173371] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.665 [2024-07-22 10:37:34.182427] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.665 [2024-07-22 10:37:34.182442] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.665 [2024-07-22 10:37:34.191438] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.665 [2024-07-22 10:37:34.191460] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.665 [2024-07-22 10:37:34.199976] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.665 [2024-07-22 10:37:34.199991] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.665 [2024-07-22 10:37:34.208759] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.665 [2024-07-22 10:37:34.208774] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.665 [2024-07-22 10:37:34.217840] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.665 [2024-07-22 10:37:34.217854] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.665 [2024-07-22 10:37:34.226651] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.665 [2024-07-22 10:37:34.226666] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.665 [2024-07-22 10:37:34.235699] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.665 [2024-07-22 10:37:34.235714] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.665 [2024-07-22 10:37:34.244190] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.665 [2024-07-22 10:37:34.244205] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.665 [2024-07-22 10:37:34.252807] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.665 [2024-07-22 10:37:34.252822] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.665 [2024-07-22 10:37:34.261909] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.665 [2024-07-22 10:37:34.261924] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.665 [2024-07-22 10:37:34.270848] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.665 [2024-07-22 10:37:34.270864] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.665 [2024-07-22 10:37:34.279343] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.665 [2024-07-22 10:37:34.279358] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.665 [2024-07-22 10:37:34.288665] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.665 [2024-07-22 10:37:34.288680] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.665 [2024-07-22 10:37:34.297209] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.665 [2024-07-22 10:37:34.297224] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.666 [2024-07-22 10:37:34.306272] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.666 [2024-07-22 10:37:34.306287] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.666 [2024-07-22 10:37:34.315307] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.666 [2024-07-22 10:37:34.315322] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.666 [2024-07-22 10:37:34.324441] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.666 [2024-07-22 10:37:34.324457] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.666 [2024-07-22 10:37:34.332204] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.666 [2024-07-22 10:37:34.332219] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.666 [2024-07-22 10:37:34.341019] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.666 [2024-07-22 10:37:34.341034] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.666 [2024-07-22 10:37:34.350233] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.666 [2024-07-22 10:37:34.350247] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.666 [2024-07-22 10:37:34.358767] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.666 [2024-07-22 10:37:34.358783] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.926 [2024-07-22 10:37:34.367451] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.926 [2024-07-22 10:37:34.367466] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.926 [2024-07-22 10:37:34.376673] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.926 [2024-07-22 10:37:34.376688] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.926 [2024-07-22 10:37:34.384737] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.926 [2024-07-22 10:37:34.384752] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.926 [2024-07-22 10:37:34.394093] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.926 [2024-07-22 10:37:34.394109] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.926 [2024-07-22 10:37:34.402727] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.926 [2024-07-22 10:37:34.402741] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.926 [2024-07-22 10:37:34.411281] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.926 [2024-07-22 10:37:34.411296] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.926 [2024-07-22 10:37:34.420393] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.926 [2024-07-22 10:37:34.420414] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.926 [2024-07-22 10:37:34.429327] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.926 [2024-07-22 10:37:34.429343] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.926 [2024-07-22 10:37:34.438467] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.926 [2024-07-22 10:37:34.438482] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.926 [2024-07-22 10:37:34.447433] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.926 [2024-07-22 10:37:34.447448] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.926 [2024-07-22 10:37:34.455719] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.926 [2024-07-22 10:37:34.455734] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.926 [2024-07-22 10:37:34.464508] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.926 [2024-07-22 10:37:34.464524] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.926 [2024-07-22 10:37:34.473239] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.926 [2024-07-22 10:37:34.473255] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.926 [2024-07-22 10:37:34.482508] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.926 [2024-07-22 10:37:34.482524] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.926 [2024-07-22 10:37:34.491189] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.926 [2024-07-22 10:37:34.491204] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.926 [2024-07-22 10:37:34.497308] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.926 [2024-07-22 10:37:34.497322] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.926 00:21:28.926 Latency(us) 00:21:28.926 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:28.926 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:21:28.926 Nvme1n1 : 5.00 19330.54 151.02 0.00 0.00 6615.14 2375.68 17367.04 00:21:28.926 =================================================================================================================== 00:21:28.926 Total : 19330.54 151.02 0.00 0.00 6615.14 2375.68 17367.04 00:21:28.926 [2024-07-22 10:37:34.505324] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.926 [2024-07-22 10:37:34.505335] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.926 [2024-07-22 10:37:34.513343] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.926 [2024-07-22 10:37:34.513354] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.926 [2024-07-22 10:37:34.521367] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.926 [2024-07-22 10:37:34.521379] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.926 [2024-07-22 10:37:34.529389] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.926 [2024-07-22 10:37:34.529405] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.926 [2024-07-22 10:37:34.537411] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.926 [2024-07-22 10:37:34.537421] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.926 [2024-07-22 10:37:34.545431] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.926 [2024-07-22 10:37:34.545440] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.926 [2024-07-22 10:37:34.553447] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.926 [2024-07-22 10:37:34.553456] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.926 [2024-07-22 10:37:34.561467] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.926 [2024-07-22 10:37:34.561475] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.926 [2024-07-22 10:37:34.569486] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.926 [2024-07-22 10:37:34.569494] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.926 [2024-07-22 10:37:34.577508] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.926 [2024-07-22 10:37:34.577516] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.926 [2024-07-22 10:37:34.585529] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.926 [2024-07-22 10:37:34.585539] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.926 [2024-07-22 10:37:34.593548] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.926 [2024-07-22 10:37:34.593556] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.926 [2024-07-22 10:37:34.601571] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.927 [2024-07-22 10:37:34.601582] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.927 [2024-07-22 10:37:34.609590] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.927 [2024-07-22 10:37:34.609598] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.927 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1971152) - No such process 00:21:28.927 10:37:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1971152 00:21:28.927 10:37:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:28.927 10:37:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.927 10:37:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:29.186 10:37:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.186 10:37:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:21:29.186 10:37:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.186 10:37:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:29.186 delay0 00:21:29.186 10:37:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.186 10:37:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:21:29.186 10:37:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.186 10:37:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:29.186 
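For context, the rpc_cmd calls traced just above are thin wrappers around SPDK's scripts/rpc.py. Run by hand, the namespace swap the zcopy test performs before the abort run would look roughly like the sketch below; the subsystem NQN and bdev names come from the trace, while the default RPC socket is an assumption of this sketch.
  # Detach the original namespace from the subsystem.
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  # Wrap malloc0 in a delay bdev adding ~1s (1000000 us) to average and p99
  # read/write latency, so the abort run that follows has long-lived I/O to cancel.
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000
  # Re-attach the delayed bdev as NSID 1 of the same subsystem.
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1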
10:37:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.186 10:37:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:21:29.186 EAL: No free 2048 kB hugepages reported on node 1 00:21:29.186 [2024-07-22 10:37:34.816589] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:21:35.763 Initializing NVMe Controllers 00:21:35.763 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:35.763 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:35.763 Initialization complete. Launching workers. 00:21:35.763 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 872 00:21:35.763 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1152, failed to submit 40 00:21:35.763 success 986, unsuccess 166, failed 0 00:21:35.763 10:37:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:21:35.763 10:37:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:21:35.763 10:37:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:35.763 10:37:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:21:35.763 10:37:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:35.763 10:37:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:21:35.763 10:37:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:35.763 10:37:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:35.763 rmmod nvme_tcp 00:21:35.763 rmmod nvme_fabrics 00:21:35.763 rmmod nvme_keyring 00:21:35.763 10:37:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:35.763 10:37:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:21:35.763 10:37:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:21:35.763 10:37:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1969091 ']' 00:21:35.763 10:37:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1969091 00:21:35.763 10:37:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 1969091 ']' 00:21:35.763 10:37:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 1969091 00:21:35.763 10:37:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:21:35.763 10:37:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:35.763 10:37:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1969091 00:21:35.763 10:37:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:35.763 10:37:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:35.763 10:37:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1969091' 00:21:35.763 killing process with pid 1969091 00:21:35.763 10:37:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 1969091 00:21:35.763 10:37:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 1969091 00:21:35.763 10:37:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:35.763 10:37:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ 
tcp == \t\c\p ]] 00:21:35.763 10:37:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:35.763 10:37:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:35.763 10:37:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:35.763 10:37:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:35.763 10:37:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:35.763 10:37:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:37.673 10:37:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:37.673 00:21:37.673 real 0m34.029s 00:21:37.673 user 0m45.151s 00:21:37.673 sys 0m11.029s 00:21:37.673 10:37:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:37.673 10:37:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:37.673 ************************************ 00:21:37.673 END TEST nvmf_zcopy 00:21:37.673 ************************************ 00:21:37.673 10:37:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:37.673 10:37:43 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:21:37.673 10:37:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:37.673 10:37:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:37.673 10:37:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:37.673 ************************************ 00:21:37.673 START TEST nvmf_nmic 00:21:37.673 ************************************ 00:21:37.673 10:37:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:21:37.932 * Looking for test storage... 
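The teardown traced above (nvmftestfini) is essentially the reverse of the earlier setup. A condensed sketch of what it drives, with the pid and interface names taken from this run; the netns removal step is an assumption about what _remove_spdk_ns does, not something the trace shows directly:
  # Flush buffers, then unload the initiator-side kernel modules.
  sync
  modprobe -v -r nvme-tcp        # the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above
  modprobe -v -r nvme-fabrics
  # Stop the nvmf_tgt application (pid 1969091 in this run).
  kill 1969091 && wait 1969091
  # Tear down the target network namespace and flush the initiator address.
  ip netns delete cvl_0_0_ns_spdk   # assumed behaviour of _remove_spdk_ns
  ip -4 addr flush cvl_0_1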
00:21:37.932 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:37.932 10:37:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:37.932 10:37:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:21:37.932 10:37:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:37.932 10:37:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:37.932 10:37:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:37.932 10:37:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:37.932 10:37:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:37.932 10:37:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:37.932 10:37:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:37.932 10:37:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:37.932 10:37:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:37.932 10:37:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:37.932 10:37:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:37.932 10:37:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:37.932 10:37:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:37.932 10:37:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:37.932 10:37:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:37.932 10:37:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:37.932 10:37:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:37.932 10:37:43 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:37.932 10:37:43 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:37.932 10:37:43 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:37.932 10:37:43 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.932 10:37:43 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.932 10:37:43 
nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.932 10:37:43 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:21:37.932 10:37:43 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.932 10:37:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:21:37.932 10:37:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:37.932 10:37:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:37.932 10:37:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:37.932 10:37:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:37.932 10:37:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:37.932 10:37:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:37.932 10:37:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:37.932 10:37:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:37.932 10:37:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:37.932 10:37:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:37.932 10:37:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:21:37.932 10:37:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:37.932 10:37:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:37.932 10:37:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:37.932 10:37:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:37.933 10:37:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:37.933 10:37:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:37.933 10:37:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:37.933 10:37:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:37.933 10:37:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:37.933 10:37:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:37.933 10:37:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:21:37.933 10:37:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:46.144 
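The host identity that nvmf/common.sh builds here is what later nvme connect calls present to the target. A minimal sketch of the same idea follows; the HOSTID derivation shown is an assumption, since the log only shows the resulting values, and the connect line is illustrative.
  # Generate a throw-away host NQN; its uuid portion doubles as the host ID.
  NVME_HOSTNQN=$(nvme gen-hostnqn)          # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}       # assumed derivation of the bare uuid
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  # Later initiator-side calls pass these along, e.g.:
  # nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1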
10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:46.144 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:46.144 10:37:51 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:46.144 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:46.144 Found net devices under 0000:31:00.0: cvl_0_0 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:46.144 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:46.145 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:46.145 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:46.145 Found net devices under 0000:31:00.1: cvl_0_1 00:21:46.145 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:46.145 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:46.145 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:21:46.145 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:46.145 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:46.145 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:46.145 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:46.145 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:21:46.145 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:46.145 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:46.145 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:46.145 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:46.145 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:46.145 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:46.145 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:46.145 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:46.145 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:46.145 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:46.145 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:46.145 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:46.145 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:46.145 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:46.145 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:46.145 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:46.145 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:46.145 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:46.145 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:46.145 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.527 ms 00:21:46.145 00:21:46.145 --- 10.0.0.2 ping statistics --- 00:21:46.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:46.145 rtt min/avg/max/mdev = 0.527/0.527/0.527/0.000 ms 00:21:46.145 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:46.145 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:46.145 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:21:46.145 00:21:46.145 --- 10.0.0.1 ping statistics --- 00:21:46.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:46.145 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:21:46.145 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:46.145 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:21:46.145 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:46.145 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:46.145 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:46.145 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:46.145 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:46.145 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:46.145 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:46.145 10:37:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:21:46.145 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:46.145 10:37:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:46.145 10:37:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:46.145 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1978137 00:21:46.145 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1978137 00:21:46.145 10:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:46.145 10:37:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 1978137 ']' 00:21:46.145 10:37:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:46.145 10:37:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:46.145 10:37:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:46.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:46.145 10:37:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:46.145 10:37:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:46.145 [2024-07-22 10:37:51.642444] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:21:46.145 [2024-07-22 10:37:51.642496] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:46.145 EAL: No free 2048 kB hugepages reported on node 1 00:21:46.145 [2024-07-22 10:37:51.714807] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:46.145 [2024-07-22 10:37:51.747320] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:46.145 [2024-07-22 10:37:51.747359] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:46.145 [2024-07-22 10:37:51.747367] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:46.145 [2024-07-22 10:37:51.747373] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:46.145 [2024-07-22 10:37:51.747379] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:46.145 [2024-07-22 10:37:51.747518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:46.145 [2024-07-22 10:37:51.747633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:46.145 [2024-07-22 10:37:51.747792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:46.145 [2024-07-22 10:37:51.747794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:46.712 10:37:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:46.712 10:37:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:21:46.712 10:37:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:46.712 10:37:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:46.712 10:37:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:46.971 10:37:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:46.971 10:37:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:46.971 10:37:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.971 10:37:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:46.971 [2024-07-22 10:37:52.452126] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:46.971 10:37:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.971 10:37:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:46.971 10:37:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.971 10:37:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:46.971 Malloc0 00:21:46.971 10:37:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.971 10:37:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:21:46.971 10:37:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.971 10:37:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:46.971 10:37:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.971 10:37:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:46.971 10:37:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.972 10:37:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:46.972 10:37:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.972 10:37:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:46.972 10:37:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.972 10:37:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:46.972 [2024-07-22 10:37:52.511469] tcp.c:1006:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:46.972 10:37:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.972 10:37:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:21:46.972 test case1: single bdev can't be used in multiple subsystems 00:21:46.972 10:37:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:21:46.972 10:37:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.972 10:37:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:46.972 10:37:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.972 10:37:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:46.972 10:37:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.972 10:37:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:46.972 10:37:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.972 10:37:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:21:46.972 10:37:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:21:46.972 10:37:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.972 10:37:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:46.972 [2024-07-22 10:37:52.547385] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:21:46.972 [2024-07-22 10:37:52.547408] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:21:46.972 [2024-07-22 10:37:52.547416] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:46.972 request: 00:21:46.972 { 00:21:46.972 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:21:46.972 "namespace": { 00:21:46.972 "bdev_name": "Malloc0", 00:21:46.972 "no_auto_visible": false 00:21:46.972 }, 00:21:46.972 "method": "nvmf_subsystem_add_ns", 00:21:46.972 "req_id": 1 00:21:46.972 } 00:21:46.972 Got JSON-RPC error response 00:21:46.972 response: 00:21:46.972 { 00:21:46.972 "code": -32602, 00:21:46.972 "message": "Invalid parameters" 00:21:46.972 } 00:21:46.972 10:37:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:46.972 10:37:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:21:46.972 10:37:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:21:46.972 10:37:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:21:46.972 Adding namespace failed - expected result. 
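The -32602 error above is the expected outcome of test case 1: Malloc0 is already claimed (exclusive_write) by nqn.2016-06.io.spdk:cnode1, so attaching it to cnode2 is refused. A minimal sketch of reproducing the same check by hand against a running nvmf_tgt, using the rpc.py helper and the names from this run (the socket path and the error handling are assumptions of this sketch, not the literal nmic.sh code):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumes default socket /var/tmp/spdk.sock
  $rpc bdev_malloc_create 64 512 -b Malloc0                              # shared backing bdev
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0          # first claim succeeds
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  if ! $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
      echo 'second claim rejected as expected: bdev already claimed by another subsystem'
  fi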
00:21:46.972 10:37:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:21:46.972 test case2: host connect to nvmf target in multiple paths 00:21:46.972 10:37:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:46.972 10:37:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.972 10:37:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:46.972 [2024-07-22 10:37:52.559546] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:46.972 10:37:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.972 10:37:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:48.352 10:37:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:21:50.260 10:37:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:21:50.260 10:37:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:21:50.260 10:37:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:21:50.260 10:37:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:21:50.260 10:37:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:21:52.183 10:37:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:52.183 10:37:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:52.183 10:37:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:21:52.183 10:37:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:21:52.183 10:37:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:52.183 10:37:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:21:52.183 10:37:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:21:52.183 [global] 00:21:52.183 thread=1 00:21:52.183 invalidate=1 00:21:52.183 rw=write 00:21:52.183 time_based=1 00:21:52.183 runtime=1 00:21:52.183 ioengine=libaio 00:21:52.183 direct=1 00:21:52.183 bs=4096 00:21:52.183 iodepth=1 00:21:52.183 norandommap=0 00:21:52.183 numjobs=1 00:21:52.183 00:21:52.183 verify_dump=1 00:21:52.183 verify_backlog=512 00:21:52.183 verify_state_save=0 00:21:52.183 do_verify=1 00:21:52.183 verify=crc32c-intel 00:21:52.183 [job0] 00:21:52.183 filename=/dev/nvme0n1 00:21:52.183 Could not set queue depth (nvme0n1) 00:21:52.443 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:52.443 fio-3.35 00:21:52.443 Starting 1 thread 00:21:53.412 00:21:53.412 job0: (groupid=0, jobs=1): err= 0: pid=1979673: Mon Jul 22 10:37:59 2024 00:21:53.412 read: IOPS=16, BW=66.9KiB/s (68.5kB/s)(68.0KiB/1016msec) 00:21:53.412 slat (nsec): min=24954, max=25821, avg=25226.59, stdev=237.43 
00:21:53.412 clat (usec): min=1015, max=43022, avg=39699.65, stdev=9975.08 00:21:53.412 lat (usec): min=1040, max=43047, avg=39724.88, stdev=9975.09 00:21:53.412 clat percentiles (usec): 00:21:53.412 | 1.00th=[ 1012], 5.00th=[ 1012], 10.00th=[41681], 20.00th=[41681], 00:21:53.412 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:21:53.412 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[43254], 00:21:53.412 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:21:53.412 | 99.99th=[43254] 00:21:53.412 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:21:53.412 slat (usec): min=9, max=25864, avg=79.07, stdev=1141.82 00:21:53.412 clat (usec): min=258, max=815, avg=577.55, stdev=101.86 00:21:53.412 lat (usec): min=268, max=26605, avg=656.62, stdev=1153.92 00:21:53.412 clat percentiles (usec): 00:21:53.412 | 1.00th=[ 367], 5.00th=[ 388], 10.00th=[ 449], 20.00th=[ 482], 00:21:53.412 | 30.00th=[ 529], 40.00th=[ 570], 50.00th=[ 586], 60.00th=[ 619], 00:21:53.412 | 70.00th=[ 652], 80.00th=[ 668], 90.00th=[ 693], 95.00th=[ 734], 00:21:53.412 | 99.00th=[ 775], 99.50th=[ 791], 99.90th=[ 816], 99.95th=[ 816], 00:21:53.412 | 99.99th=[ 816] 00:21:53.412 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:21:53.412 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:21:53.412 lat (usec) : 500=26.65%, 750=68.05%, 1000=2.08% 00:21:53.412 lat (msec) : 2=0.19%, 50=3.02% 00:21:53.412 cpu : usr=1.28%, sys=0.79%, ctx=532, majf=0, minf=1 00:21:53.412 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:53.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:53.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:53.412 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:53.412 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:53.412 00:21:53.412 Run status group 0 (all jobs): 00:21:53.412 READ: bw=66.9KiB/s (68.5kB/s), 66.9KiB/s-66.9KiB/s (68.5kB/s-68.5kB/s), io=68.0KiB (69.6kB), run=1016-1016msec 00:21:53.412 WRITE: bw=2016KiB/s (2064kB/s), 2016KiB/s-2016KiB/s (2064kB/s-2064kB/s), io=2048KiB (2097kB), run=1016-1016msec 00:21:53.412 00:21:53.412 Disk stats (read/write): 00:21:53.412 nvme0n1: ios=39/512, merge=0/0, ticks=1514/286, in_queue=1800, util=98.80% 00:21:53.412 10:37:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:53.672 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:21:53.672 10:37:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:53.672 10:37:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:21:53.672 10:37:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:53.672 10:37:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:53.672 10:37:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:53.672 10:37:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:53.672 10:37:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:21:53.672 10:37:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:21:53.672 10:37:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:21:53.672 10:37:59 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:21:53.672 10:37:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:21:53.672 10:37:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:53.672 10:37:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:21:53.672 10:37:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:53.672 10:37:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:53.672 rmmod nvme_tcp 00:21:53.672 rmmod nvme_fabrics 00:21:53.672 rmmod nvme_keyring 00:21:53.932 10:37:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:53.932 10:37:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:21:53.932 10:37:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:21:53.932 10:37:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1978137 ']' 00:21:53.932 10:37:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1978137 00:21:53.932 10:37:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 1978137 ']' 00:21:53.932 10:37:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 1978137 00:21:53.932 10:37:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:21:53.932 10:37:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:53.932 10:37:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1978137 00:21:53.932 10:37:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:53.932 10:37:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:53.932 10:37:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1978137' 00:21:53.932 killing process with pid 1978137 00:21:53.932 10:37:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 1978137 00:21:53.932 10:37:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 1978137 00:21:53.932 10:37:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:53.932 10:37:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:53.932 10:37:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:53.932 10:37:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:53.932 10:37:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:53.932 10:37:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:53.932 10:37:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:53.932 10:37:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:56.475 10:38:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:56.475 00:21:56.475 real 0m18.295s 00:21:56.475 user 0m47.394s 00:21:56.475 sys 0m6.601s 00:21:56.475 10:38:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:56.475 10:38:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:56.475 ************************************ 00:21:56.475 END TEST nvmf_nmic 00:21:56.475 ************************************ 00:21:56.475 10:38:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:56.475 10:38:01 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:21:56.475 10:38:01 
nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:56.475 10:38:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:56.475 10:38:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:56.475 ************************************ 00:21:56.475 START TEST nvmf_fio_target 00:21:56.475 ************************************ 00:21:56.475 10:38:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:21:56.475 * Looking for test storage... 00:21:56.475 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:56.475 10:38:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:56.475 10:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:21:56.475 10:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:56.475 10:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:56.475 10:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:56.475 10:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:56.475 10:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:56.475 10:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:56.475 10:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:56.475 10:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:56.475 10:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:56.475 10:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:56.475 10:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:56.475 10:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:56.475 10:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:56.475 10:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:56.475 10:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:56.475 10:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:56.475 10:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:56.475 10:38:01 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:56.475 10:38:01 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:56.475 10:38:01 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:56.475 10:38:01 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.475 10:38:01 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.475 10:38:01 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.475 10:38:01 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:21:56.475 10:38:01 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.475 10:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:21:56.475 10:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:56.475 10:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:56.475 10:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:56.475 10:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:56.475 10:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:56.475 10:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:56.475 10:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:56.475 10:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:56.475 10:38:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:56.475 10:38:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:56.475 10:38:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:56.475 10:38:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:21:56.475 10:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:56.475 10:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:56.475 10:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:56.475 10:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:56.475 10:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:56.475 10:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:56.475 10:38:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:56.475 10:38:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:56.476 10:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:56.476 10:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:56.476 10:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:21:56.476 10:38:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.612 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:04.612 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:22:04.612 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:04.612 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:04.612 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:04.612 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:04.612 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:04.612 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:22:04.612 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:04.612 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:22:04.612 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:22:04.612 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:22:04.612 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:22:04.612 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:22:04.612 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:22:04.612 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:04.612 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:04.612 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:04.612 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:04.612 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:04.612 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:04.612 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:04.612 10:38:09 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:04.612 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:04.612 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:04.612 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:04.612 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:04.612 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:04.612 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:04.612 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:04.612 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:04.612 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:04.612 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:04.612 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:04.612 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:04.612 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:04.612 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:04.612 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:04.612 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:04.612 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:04.612 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:04.612 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:04.612 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:04.612 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:04.612 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:04.612 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:04.612 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:04.612 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:04.613 10:38:09 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:04.613 Found net devices under 0000:31:00.0: cvl_0_0 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:04.613 Found net devices under 0000:31:00.1: cvl_0_1 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:04.613 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:04.613 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.418 ms 00:22:04.613 00:22:04.613 --- 10.0.0.2 ping statistics --- 00:22:04.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:04.613 rtt min/avg/max/mdev = 0.418/0.418/0.418/0.000 ms 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:04.613 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:04.613 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.343 ms 00:22:04.613 00:22:04.613 --- 10.0.0.1 ping statistics --- 00:22:04.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:04.613 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1984511 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1984511 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 1984511 ']' 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:04.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
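The nvmftestinit block above repeats the namespace plumbing already seen in the nmic test: the target-side port cvl_0_0 is moved into its own network namespace, both ends get addresses on 10.0.0.0/24, TCP port 4420 is opened on the initiator interface, reachability is ping-checked in both directions, and nvmf_tgt is then launched inside the namespace. A condensed sketch of that setup with the interface names and addresses from this run (on another host the cvl_* names and the nvmf_tgt path would differ):

  NS=cvl_0_0_ns_spdk
  ip netns add $NS
  ip link set cvl_0_0 netns $NS                          # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root namespace
  ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec $NS ip link set cvl_0_0 up
  ip netns exec $NS ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # root namespace -> target
  ip netns exec $NS ping -c 1 10.0.0.1                   # target namespace -> initiator
  modprobe nvme-tcp
  ip netns exec $NS /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

Isolating the target port in a namespace is what lets a single two-port NIC act as both NVMe/TCP target and initiator on the same machine.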
00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:04.613 10:38:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.613 [2024-07-22 10:38:09.936095] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:22:04.613 [2024-07-22 10:38:09.936162] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:04.613 EAL: No free 2048 kB hugepages reported on node 1 00:22:04.613 [2024-07-22 10:38:10.017263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:04.613 [2024-07-22 10:38:10.060235] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:04.613 [2024-07-22 10:38:10.060280] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:04.613 [2024-07-22 10:38:10.060293] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:04.613 [2024-07-22 10:38:10.060300] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:04.613 [2024-07-22 10:38:10.060305] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:04.613 [2024-07-22 10:38:10.060464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:04.613 [2024-07-22 10:38:10.060515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:04.613 [2024-07-22 10:38:10.060778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:04.613 [2024-07-22 10:38:10.060779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:05.191 10:38:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:05.191 10:38:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:22:05.191 10:38:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:05.191 10:38:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:05.191 10:38:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.191 10:38:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:05.191 10:38:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:05.450 [2024-07-22 10:38:10.899486] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:05.450 10:38:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:05.450 10:38:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:22:05.450 10:38:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:05.709 10:38:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:22:05.709 10:38:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:05.968 10:38:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
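Here and in the calls that continue below, fio.sh assembles the bdev layout the four fio jobs will exercise: seven 64 MiB malloc bdevs, a raid0 striped over Malloc2/Malloc3, a concat0 over Malloc4..Malloc6, and one subsystem exporting Malloc0, Malloc1, raid0 and concat0 over TCP. A condensed sketch of the same rpc.py sequence (the loops are shorthand of this sketch, not the literal fio.sh code; names and sizes mirror this run):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192             # TCP transport, in-capsule data size 8192
  for _ in 1 2 3 4 5 6 7; do
      $rpc bdev_malloc_create 64 512                        # auto-named Malloc0 .. Malloc6
  done
  $rpc bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
  $rpc bdev_raid_create -n concat0 -z 64 -r concat -b 'Malloc4 Malloc5 Malloc6'
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  for b in Malloc0 Malloc1 raid0 concat0; do
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 $b
  done
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

After nvme connect to that subsystem the host sees four namespaces (nvme0n1..nvme0n4), which is why the fio job file further down lists four filenames.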
00:22:05.968 10:38:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:05.968 10:38:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:22:05.968 10:38:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:22:06.228 10:38:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:06.488 10:38:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:22:06.488 10:38:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:06.488 10:38:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:22:06.488 10:38:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:06.748 10:38:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:22:06.748 10:38:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:22:07.007 10:38:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:22:07.007 10:38:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:22:07.007 10:38:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:07.267 10:38:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:22:07.267 10:38:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:07.538 10:38:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:07.539 [2024-07-22 10:38:13.140477] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:07.539 10:38:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:22:07.799 10:38:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:22:08.059 10:38:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:22:09.443 10:38:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:22:09.443 10:38:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:22:09.443 10:38:14 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:22:09.443 10:38:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:22:09.443 10:38:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:22:09.443 10:38:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:22:11.352 10:38:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:22:11.352 10:38:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:22:11.352 10:38:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:22:11.352 10:38:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:22:11.352 10:38:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:22:11.352 10:38:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:22:11.352 10:38:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:22:11.352 [global] 00:22:11.352 thread=1 00:22:11.352 invalidate=1 00:22:11.352 rw=write 00:22:11.352 time_based=1 00:22:11.352 runtime=1 00:22:11.352 ioengine=libaio 00:22:11.352 direct=1 00:22:11.352 bs=4096 00:22:11.352 iodepth=1 00:22:11.352 norandommap=0 00:22:11.352 numjobs=1 00:22:11.352 00:22:11.352 verify_dump=1 00:22:11.352 verify_backlog=512 00:22:11.352 verify_state_save=0 00:22:11.352 do_verify=1 00:22:11.352 verify=crc32c-intel 00:22:11.352 [job0] 00:22:11.352 filename=/dev/nvme0n1 00:22:11.352 [job1] 00:22:11.352 filename=/dev/nvme0n2 00:22:11.352 [job2] 00:22:11.352 filename=/dev/nvme0n3 00:22:11.352 [job3] 00:22:11.352 filename=/dev/nvme0n4 00:22:11.634 Could not set queue depth (nvme0n1) 00:22:11.634 Could not set queue depth (nvme0n2) 00:22:11.634 Could not set queue depth (nvme0n3) 00:22:11.634 Could not set queue depth (nvme0n4) 00:22:11.892 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:11.892 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:11.892 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:11.892 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:11.892 fio-3.35 00:22:11.892 Starting 4 threads 00:22:13.291 00:22:13.291 job0: (groupid=0, jobs=1): err= 0: pid=1986272: Mon Jul 22 10:38:18 2024 00:22:13.291 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:22:13.291 slat (nsec): min=6692, max=61526, avg=27278.86, stdev=3510.68 00:22:13.291 clat (usec): min=768, max=1197, avg=981.87, stdev=61.65 00:22:13.291 lat (usec): min=796, max=1225, avg=1009.15, stdev=61.63 00:22:13.291 clat percentiles (usec): 00:22:13.291 | 1.00th=[ 824], 5.00th=[ 865], 10.00th=[ 906], 20.00th=[ 938], 00:22:13.291 | 30.00th=[ 955], 40.00th=[ 971], 50.00th=[ 988], 60.00th=[ 996], 00:22:13.291 | 70.00th=[ 1012], 80.00th=[ 1029], 90.00th=[ 1057], 95.00th=[ 1074], 00:22:13.291 | 99.00th=[ 1123], 99.50th=[ 1156], 99.90th=[ 1205], 99.95th=[ 1205], 00:22:13.291 | 99.99th=[ 1205] 00:22:13.291 write: IOPS=853, BW=3413KiB/s (3494kB/s)(3416KiB/1001msec); 0 zone resets 00:22:13.291 slat (nsec): min=2681, max=58325, avg=23972.56, stdev=12664.46 00:22:13.291 clat 
(usec): min=246, max=863, avg=530.81, stdev=119.58 00:22:13.291 lat (usec): min=249, max=877, avg=554.78, stdev=123.33 00:22:13.291 clat percentiles (usec): 00:22:13.291 | 1.00th=[ 277], 5.00th=[ 343], 10.00th=[ 371], 20.00th=[ 437], 00:22:13.291 | 30.00th=[ 465], 40.00th=[ 486], 50.00th=[ 523], 60.00th=[ 562], 00:22:13.291 | 70.00th=[ 603], 80.00th=[ 644], 90.00th=[ 693], 95.00th=[ 734], 00:22:13.291 | 99.00th=[ 791], 99.50th=[ 832], 99.90th=[ 865], 99.95th=[ 865], 00:22:13.291 | 99.99th=[ 865] 00:22:13.291 bw ( KiB/s): min= 4096, max= 4096, per=42.44%, avg=4096.00, stdev= 0.00, samples=1 00:22:13.291 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:22:13.291 lat (usec) : 250=0.15%, 500=27.45%, 750=33.38%, 1000=24.67% 00:22:13.291 lat (msec) : 2=14.35% 00:22:13.291 cpu : usr=3.10%, sys=4.00%, ctx=1367, majf=0, minf=1 00:22:13.291 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:13.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:13.291 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:13.291 issued rwts: total=512,854,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:13.291 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:13.291 job1: (groupid=0, jobs=1): err= 0: pid=1986273: Mon Jul 22 10:38:18 2024 00:22:13.291 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:22:13.291 slat (nsec): min=15150, max=35252, avg=15860.63, stdev=1857.36 00:22:13.291 clat (usec): min=794, max=1446, avg=1149.24, stdev=116.56 00:22:13.291 lat (usec): min=810, max=1462, avg=1165.10, stdev=116.49 00:22:13.291 clat percentiles (usec): 00:22:13.291 | 1.00th=[ 898], 5.00th=[ 963], 10.00th=[ 996], 20.00th=[ 1045], 00:22:13.291 | 30.00th=[ 1090], 40.00th=[ 1123], 50.00th=[ 1156], 60.00th=[ 1188], 00:22:13.291 | 70.00th=[ 1205], 80.00th=[ 1254], 90.00th=[ 1303], 95.00th=[ 1352], 00:22:13.291 | 99.00th=[ 1385], 99.50th=[ 1418], 99.90th=[ 1450], 99.95th=[ 1450], 00:22:13.291 | 99.99th=[ 1450] 00:22:13.291 write: IOPS=611, BW=2446KiB/s (2504kB/s)(2448KiB/1001msec); 0 zone resets 00:22:13.291 slat (usec): min=4, max=370, avg=19.59, stdev=16.49 00:22:13.291 clat (usec): min=177, max=1121, avg=630.92, stdev=131.63 00:22:13.291 lat (usec): min=183, max=1141, avg=650.51, stdev=135.14 00:22:13.291 clat percentiles (usec): 00:22:13.291 | 1.00th=[ 330], 5.00th=[ 408], 10.00th=[ 474], 20.00th=[ 529], 00:22:13.291 | 30.00th=[ 562], 40.00th=[ 603], 50.00th=[ 635], 60.00th=[ 668], 00:22:13.291 | 70.00th=[ 693], 80.00th=[ 725], 90.00th=[ 791], 95.00th=[ 848], 00:22:13.291 | 99.00th=[ 963], 99.50th=[ 1004], 99.90th=[ 1123], 99.95th=[ 1123], 00:22:13.291 | 99.99th=[ 1123] 00:22:13.291 bw ( KiB/s): min= 4096, max= 4096, per=42.44%, avg=4096.00, stdev= 0.00, samples=1 00:22:13.291 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:22:13.291 lat (usec) : 250=0.18%, 500=8.01%, 750=37.72%, 1000=13.08% 00:22:13.291 lat (msec) : 2=41.01% 00:22:13.291 cpu : usr=1.50%, sys=3.00%, ctx=1128, majf=0, minf=1 00:22:13.291 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:13.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:13.291 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:13.291 issued rwts: total=512,612,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:13.291 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:13.291 job2: (groupid=0, jobs=1): err= 0: pid=1986274: Mon Jul 22 10:38:18 2024 00:22:13.291 
read: IOPS=17, BW=70.0KiB/s (71.7kB/s)(72.0KiB/1029msec) 00:22:13.291 slat (nsec): min=7358, max=12007, avg=10985.17, stdev=1031.18 00:22:13.291 clat (usec): min=40984, max=42025, avg=41688.85, stdev=447.03 00:22:13.291 lat (usec): min=40995, max=42035, avg=41699.84, stdev=447.21 00:22:13.292 clat percentiles (usec): 00:22:13.292 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:22:13.292 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:22:13.292 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:22:13.292 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:22:13.292 | 99.99th=[42206] 00:22:13.292 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:22:13.292 slat (usec): min=6, max=42711, avg=132.48, stdev=1989.46 00:22:13.292 clat (usec): min=158, max=647, avg=405.96, stdev=75.71 00:22:13.292 lat (usec): min=166, max=43194, avg=538.44, stdev=1993.54 00:22:13.292 clat percentiles (usec): 00:22:13.292 | 1.00th=[ 262], 5.00th=[ 289], 10.00th=[ 314], 20.00th=[ 351], 00:22:13.292 | 30.00th=[ 363], 40.00th=[ 375], 50.00th=[ 388], 60.00th=[ 412], 00:22:13.292 | 70.00th=[ 453], 80.00th=[ 482], 90.00th=[ 506], 95.00th=[ 529], 00:22:13.292 | 99.00th=[ 578], 99.50th=[ 594], 99.90th=[ 652], 99.95th=[ 652], 00:22:13.292 | 99.99th=[ 652] 00:22:13.292 bw ( KiB/s): min= 4096, max= 4096, per=42.44%, avg=4096.00, stdev= 0.00, samples=1 00:22:13.292 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:22:13.292 lat (usec) : 250=0.94%, 500=84.15%, 750=11.51% 00:22:13.292 lat (msec) : 50=3.40% 00:22:13.292 cpu : usr=0.49%, sys=0.78%, ctx=534, majf=0, minf=1 00:22:13.292 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:13.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:13.292 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:13.292 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:13.292 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:13.292 job3: (groupid=0, jobs=1): err= 0: pid=1986275: Mon Jul 22 10:38:18 2024 00:22:13.292 read: IOPS=18, BW=73.6KiB/s (75.4kB/s)(76.0KiB/1032msec) 00:22:13.292 slat (nsec): min=26149, max=42891, avg=27500.37, stdev=3880.74 00:22:13.292 clat (usec): min=40913, max=42939, avg=41566.95, stdev=601.82 00:22:13.292 lat (usec): min=40939, max=42965, avg=41594.46, stdev=601.70 00:22:13.292 clat percentiles (usec): 00:22:13.292 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:22:13.292 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:22:13.292 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:22:13.292 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:22:13.292 | 99.99th=[42730] 00:22:13.292 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:22:13.292 slat (nsec): min=9733, max=67699, avg=28166.54, stdev=11250.13 00:22:13.292 clat (usec): min=169, max=633, avg=436.88, stdev=81.80 00:22:13.292 lat (usec): min=182, max=668, avg=465.05, stdev=87.37 00:22:13.292 clat percentiles (usec): 00:22:13.292 | 1.00th=[ 249], 5.00th=[ 285], 10.00th=[ 318], 20.00th=[ 363], 00:22:13.292 | 30.00th=[ 396], 40.00th=[ 429], 50.00th=[ 457], 60.00th=[ 474], 00:22:13.292 | 70.00th=[ 490], 80.00th=[ 506], 90.00th=[ 529], 95.00th=[ 545], 00:22:13.292 | 99.00th=[ 594], 99.50th=[ 611], 99.90th=[ 635], 99.95th=[ 635], 00:22:13.292 | 99.99th=[ 635] 
00:22:13.292 bw ( KiB/s): min= 4096, max= 4096, per=42.44%, avg=4096.00, stdev= 0.00, samples=1 00:22:13.292 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:22:13.292 lat (usec) : 250=1.13%, 500=72.69%, 750=22.60% 00:22:13.292 lat (msec) : 50=3.58% 00:22:13.292 cpu : usr=0.68%, sys=1.36%, ctx=532, majf=0, minf=1 00:22:13.292 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:13.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:13.292 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:13.292 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:13.292 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:13.292 00:22:13.292 Run status group 0 (all jobs): 00:22:13.292 READ: bw=4112KiB/s (4211kB/s), 70.0KiB/s-2046KiB/s (71.7kB/s-2095kB/s), io=4244KiB (4346kB), run=1001-1032msec 00:22:13.292 WRITE: bw=9651KiB/s (9883kB/s), 1984KiB/s-3413KiB/s (2032kB/s-3494kB/s), io=9960KiB (10.2MB), run=1001-1032msec 00:22:13.292 00:22:13.292 Disk stats (read/write): 00:22:13.292 nvme0n1: ios=561/590, merge=0/0, ticks=752/255, in_queue=1007, util=84.27% 00:22:13.292 nvme0n2: ios=479/512, merge=0/0, ticks=537/259, in_queue=796, util=90.92% 00:22:13.292 nvme0n3: ios=67/512, merge=0/0, ticks=1077/207, in_queue=1284, util=95.04% 00:22:13.292 nvme0n4: ios=36/512, merge=0/0, ticks=1468/222, in_queue=1690, util=94.12% 00:22:13.292 10:38:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:22:13.292 [global] 00:22:13.292 thread=1 00:22:13.292 invalidate=1 00:22:13.292 rw=randwrite 00:22:13.292 time_based=1 00:22:13.292 runtime=1 00:22:13.292 ioengine=libaio 00:22:13.292 direct=1 00:22:13.292 bs=4096 00:22:13.292 iodepth=1 00:22:13.292 norandommap=0 00:22:13.292 numjobs=1 00:22:13.292 00:22:13.292 verify_dump=1 00:22:13.292 verify_backlog=512 00:22:13.292 verify_state_save=0 00:22:13.292 do_verify=1 00:22:13.292 verify=crc32c-intel 00:22:13.292 [job0] 00:22:13.292 filename=/dev/nvme0n1 00:22:13.292 [job1] 00:22:13.292 filename=/dev/nvme0n2 00:22:13.292 [job2] 00:22:13.292 filename=/dev/nvme0n3 00:22:13.292 [job3] 00:22:13.292 filename=/dev/nvme0n4 00:22:13.292 Could not set queue depth (nvme0n1) 00:22:13.292 Could not set queue depth (nvme0n2) 00:22:13.292 Could not set queue depth (nvme0n3) 00:22:13.292 Could not set queue depth (nvme0n4) 00:22:13.554 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:13.554 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:13.554 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:13.554 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:13.554 fio-3.35 00:22:13.554 Starting 4 threads 00:22:14.969 00:22:14.969 job0: (groupid=0, jobs=1): err= 0: pid=1986799: Mon Jul 22 10:38:20 2024 00:22:14.969 read: IOPS=18, BW=74.2KiB/s (76.0kB/s)(76.0KiB/1024msec) 00:22:14.969 slat (nsec): min=7869, max=26171, avg=23897.26, stdev=5260.79 00:22:14.969 clat (usec): min=810, max=42971, avg=39890.71, stdev=9469.38 00:22:14.969 lat (usec): min=820, max=42997, avg=39914.60, stdev=9472.73 00:22:14.969 clat percentiles (usec): 00:22:14.969 | 1.00th=[ 807], 5.00th=[ 807], 10.00th=[41681], 20.00th=[41681], 
00:22:14.969 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:22:14.969 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:22:14.969 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:22:14.969 | 99.99th=[42730] 00:22:14.969 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:22:14.969 slat (nsec): min=9178, max=49292, avg=20367.57, stdev=11397.49 00:22:14.969 clat (usec): min=139, max=1203, avg=492.07, stdev=189.32 00:22:14.969 lat (usec): min=149, max=1221, avg=512.43, stdev=197.67 00:22:14.969 clat percentiles (usec): 00:22:14.969 | 1.00th=[ 235], 5.00th=[ 247], 10.00th=[ 262], 20.00th=[ 293], 00:22:14.969 | 30.00th=[ 347], 40.00th=[ 408], 50.00th=[ 453], 60.00th=[ 537], 00:22:14.969 | 70.00th=[ 619], 80.00th=[ 676], 90.00th=[ 766], 95.00th=[ 799], 00:22:14.969 | 99.00th=[ 889], 99.50th=[ 930], 99.90th=[ 1205], 99.95th=[ 1205], 00:22:14.969 | 99.99th=[ 1205] 00:22:14.969 bw ( KiB/s): min= 4096, max= 4096, per=51.60%, avg=4096.00, stdev= 0.00, samples=1 00:22:14.969 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:22:14.969 lat (usec) : 250=6.21%, 500=45.76%, 750=33.52%, 1000=10.92% 00:22:14.969 lat (msec) : 2=0.19%, 50=3.39% 00:22:14.969 cpu : usr=0.49%, sys=1.08%, ctx=533, majf=0, minf=1 00:22:14.969 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:14.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.969 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.969 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:14.969 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:14.969 job1: (groupid=0, jobs=1): err= 0: pid=1986800: Mon Jul 22 10:38:20 2024 00:22:14.969 read: IOPS=17, BW=71.8KiB/s (73.5kB/s)(72.0KiB/1003msec) 00:22:14.969 slat (nsec): min=25889, max=44473, avg=27481.50, stdev=4290.12 00:22:14.969 clat (usec): min=40886, max=42419, avg=41375.82, stdev=545.31 00:22:14.969 lat (usec): min=40912, max=42463, avg=41403.30, stdev=547.26 00:22:14.969 clat percentiles (usec): 00:22:14.969 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:22:14.969 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:22:14.969 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:22:14.969 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:22:14.969 | 99.99th=[42206] 00:22:14.969 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:22:14.969 slat (nsec): min=8640, max=52964, avg=30845.03, stdev=8263.68 00:22:14.969 clat (usec): min=224, max=752, avg=462.97, stdev=104.20 00:22:14.969 lat (usec): min=245, max=785, avg=493.81, stdev=106.26 00:22:14.969 clat percentiles (usec): 00:22:14.969 | 1.00th=[ 243], 5.00th=[ 310], 10.00th=[ 330], 20.00th=[ 355], 00:22:14.969 | 30.00th=[ 392], 40.00th=[ 445], 50.00th=[ 469], 60.00th=[ 490], 00:22:14.969 | 70.00th=[ 523], 80.00th=[ 562], 90.00th=[ 611], 95.00th=[ 635], 00:22:14.969 | 99.00th=[ 676], 99.50th=[ 709], 99.90th=[ 750], 99.95th=[ 750], 00:22:14.969 | 99.99th=[ 750] 00:22:14.969 bw ( KiB/s): min= 4096, max= 4096, per=51.60%, avg=4096.00, stdev= 0.00, samples=1 00:22:14.969 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:22:14.969 lat (usec) : 250=1.13%, 500=62.08%, 750=33.21%, 1000=0.19% 00:22:14.969 lat (msec) : 50=3.40% 00:22:14.969 cpu : usr=1.00%, sys=2.10%, ctx=532, majf=0, minf=1 00:22:14.969 IO 
depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:14.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.969 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.969 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:14.969 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:14.969 job2: (groupid=0, jobs=1): err= 0: pid=1986801: Mon Jul 22 10:38:20 2024 00:22:14.969 read: IOPS=17, BW=69.8KiB/s (71.4kB/s)(72.0KiB/1032msec) 00:22:14.969 slat (nsec): min=24699, max=30819, avg=25590.56, stdev=1532.69 00:22:14.969 clat (usec): min=1138, max=42237, avg=39640.25, stdev=9612.38 00:22:14.969 lat (usec): min=1163, max=42263, avg=39665.84, stdev=9612.53 00:22:14.969 clat percentiles (usec): 00:22:14.969 | 1.00th=[ 1139], 5.00th=[ 1139], 10.00th=[41157], 20.00th=[41681], 00:22:14.969 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:22:14.969 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:22:14.969 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:22:14.969 | 99.99th=[42206] 00:22:14.969 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:22:14.969 slat (nsec): min=9517, max=52031, avg=27733.97, stdev=9675.05 00:22:14.969 clat (usec): min=228, max=867, avg=584.45, stdev=141.38 00:22:14.969 lat (usec): min=260, max=899, avg=612.18, stdev=144.91 00:22:14.969 clat percentiles (usec): 00:22:14.969 | 1.00th=[ 269], 5.00th=[ 355], 10.00th=[ 383], 20.00th=[ 445], 00:22:14.969 | 30.00th=[ 510], 40.00th=[ 553], 50.00th=[ 603], 60.00th=[ 635], 00:22:14.969 | 70.00th=[ 668], 80.00th=[ 725], 90.00th=[ 766], 95.00th=[ 791], 00:22:14.969 | 99.00th=[ 840], 99.50th=[ 848], 99.90th=[ 865], 99.95th=[ 865], 00:22:14.969 | 99.99th=[ 865] 00:22:14.969 bw ( KiB/s): min= 4096, max= 4096, per=51.60%, avg=4096.00, stdev= 0.00, samples=1 00:22:14.969 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:22:14.969 lat (usec) : 250=0.19%, 500=26.60%, 750=56.04%, 1000=13.77% 00:22:14.969 lat (msec) : 2=0.19%, 50=3.21% 00:22:14.969 cpu : usr=0.68%, sys=1.36%, ctx=531, majf=0, minf=1 00:22:14.969 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:14.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.969 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.969 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:14.969 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:14.969 job3: (groupid=0, jobs=1): err= 0: pid=1986802: Mon Jul 22 10:38:20 2024 00:22:14.969 read: IOPS=178, BW=715KiB/s (732kB/s)(716KiB/1001msec) 00:22:14.969 slat (nsec): min=6859, max=62878, avg=26169.90, stdev=5208.99 00:22:14.969 clat (usec): min=390, max=42465, avg=4142.57, stdev=11019.77 00:22:14.969 lat (usec): min=417, max=42490, avg=4168.74, stdev=11019.48 00:22:14.969 clat percentiles (usec): 00:22:14.969 | 1.00th=[ 445], 5.00th=[ 586], 10.00th=[ 709], 20.00th=[ 799], 00:22:14.969 | 30.00th=[ 938], 40.00th=[ 988], 50.00th=[ 1012], 60.00th=[ 1037], 00:22:14.969 | 70.00th=[ 1057], 80.00th=[ 1090], 90.00th=[ 1156], 95.00th=[42206], 00:22:14.969 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:22:14.969 | 99.99th=[42206] 00:22:14.969 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:22:14.969 slat (nsec): min=8762, max=50422, avg=30075.48, stdev=8391.93 00:22:14.969 
clat (usec): min=151, max=894, avg=456.70, stdev=122.45 00:22:14.969 lat (usec): min=160, max=940, avg=486.78, stdev=125.24 00:22:14.969 clat percentiles (usec): 00:22:14.969 | 1.00th=[ 221], 5.00th=[ 262], 10.00th=[ 310], 20.00th=[ 347], 00:22:14.969 | 30.00th=[ 375], 40.00th=[ 429], 50.00th=[ 461], 60.00th=[ 486], 00:22:14.969 | 70.00th=[ 519], 80.00th=[ 562], 90.00th=[ 619], 95.00th=[ 660], 00:22:14.969 | 99.00th=[ 734], 99.50th=[ 783], 99.90th=[ 898], 99.95th=[ 898], 00:22:14.969 | 99.99th=[ 898] 00:22:14.969 bw ( KiB/s): min= 4096, max= 4096, per=51.60%, avg=4096.00, stdev= 0.00, samples=1 00:22:14.969 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:22:14.969 lat (usec) : 250=2.75%, 500=45.73%, 750=28.94%, 1000=8.83% 00:22:14.969 lat (msec) : 2=11.72%, 50=2.03% 00:22:14.969 cpu : usr=1.70%, sys=2.40%, ctx=692, majf=0, minf=1 00:22:14.969 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:14.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.969 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.969 issued rwts: total=179,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:14.969 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:14.969 00:22:14.969 Run status group 0 (all jobs): 00:22:14.969 READ: bw=907KiB/s (929kB/s), 69.8KiB/s-715KiB/s (71.4kB/s-732kB/s), io=936KiB (958kB), run=1001-1032msec 00:22:14.969 WRITE: bw=7938KiB/s (8128kB/s), 1984KiB/s-2046KiB/s (2032kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1032msec 00:22:14.969 00:22:14.969 Disk stats (read/write): 00:22:14.969 nvme0n1: ios=65/512, merge=0/0, ticks=860/244, in_queue=1104, util=92.89% 00:22:14.969 nvme0n2: ios=58/512, merge=0/0, ticks=762/182, in_queue=944, util=99.49% 00:22:14.969 nvme0n3: ios=37/512, merge=0/0, ticks=1450/282, in_queue=1732, util=97.26% 00:22:14.969 nvme0n4: ios=81/512, merge=0/0, ticks=1490/188, in_queue=1678, util=97.44% 00:22:14.969 10:38:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:22:14.969 [global] 00:22:14.969 thread=1 00:22:14.969 invalidate=1 00:22:14.969 rw=write 00:22:14.969 time_based=1 00:22:14.969 runtime=1 00:22:14.969 ioengine=libaio 00:22:14.969 direct=1 00:22:14.969 bs=4096 00:22:14.969 iodepth=128 00:22:14.969 norandommap=0 00:22:14.969 numjobs=1 00:22:14.969 00:22:14.969 verify_dump=1 00:22:14.969 verify_backlog=512 00:22:14.969 verify_state_save=0 00:22:14.969 do_verify=1 00:22:14.969 verify=crc32c-intel 00:22:14.969 [job0] 00:22:14.969 filename=/dev/nvme0n1 00:22:14.969 [job1] 00:22:14.969 filename=/dev/nvme0n2 00:22:14.969 [job2] 00:22:14.969 filename=/dev/nvme0n3 00:22:14.969 [job3] 00:22:14.969 filename=/dev/nvme0n4 00:22:14.969 Could not set queue depth (nvme0n1) 00:22:14.969 Could not set queue depth (nvme0n2) 00:22:14.969 Could not set queue depth (nvme0n3) 00:22:14.969 Could not set queue depth (nvme0n4) 00:22:15.234 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:15.234 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:15.234 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:15.234 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:15.234 fio-3.35 00:22:15.234 Starting 4 threads 
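Apart from the device files, the only inputs that changed against the previous pass are the wrapper's -d and -t flags, which line up with the iodepth= and rw= lines of the generated job file (-i maps to bs=, -r to runtime=, and -v appears to add the crc32c verify options). A sketch of running the same call outside the test script, with the workspace path taken from this log; it would differ on another checkout:

# Sequential-write pass at queue depth 128, as traced above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v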
00:22:16.636 00:22:16.636 job0: (groupid=0, jobs=1): err= 0: pid=1987326: Mon Jul 22 10:38:21 2024 00:22:16.636 read: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec) 00:22:16.636 slat (nsec): min=867, max=19624k, avg=127427.96, stdev=908620.53 00:22:16.636 clat (usec): min=4627, max=63440, avg=14298.39, stdev=8690.24 00:22:16.636 lat (usec): min=4636, max=63448, avg=14425.82, stdev=8797.44 00:22:16.636 clat percentiles (usec): 00:22:16.636 | 1.00th=[ 6849], 5.00th=[ 7767], 10.00th=[ 8848], 20.00th=[ 9372], 00:22:16.636 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10290], 60.00th=[11207], 00:22:16.636 | 70.00th=[13829], 80.00th=[19006], 90.00th=[26084], 95.00th=[32637], 00:22:16.636 | 99.00th=[47973], 99.50th=[55313], 99.90th=[63177], 99.95th=[63177], 00:22:16.636 | 99.99th=[63701] 00:22:16.636 write: IOPS=3418, BW=13.4MiB/s (14.0MB/s)(13.4MiB/1006msec); 0 zone resets 00:22:16.636 slat (nsec): min=1630, max=18929k, avg=172707.23, stdev=858271.57 00:22:16.636 clat (usec): min=3339, max=82525, avg=24282.50, stdev=20134.22 00:22:16.636 lat (usec): min=3347, max=82538, avg=24455.21, stdev=20269.45 00:22:16.636 clat percentiles (usec): 00:22:16.636 | 1.00th=[ 4146], 5.00th=[ 6128], 10.00th=[ 7504], 20.00th=[ 9503], 00:22:16.636 | 30.00th=[13042], 40.00th=[13960], 50.00th=[14746], 60.00th=[19530], 00:22:16.636 | 70.00th=[28181], 80.00th=[35914], 90.00th=[67634], 95.00th=[73925], 00:22:16.636 | 99.00th=[78119], 99.50th=[79168], 99.90th=[82314], 99.95th=[82314], 00:22:16.636 | 99.99th=[82314] 00:22:16.636 bw ( KiB/s): min=10112, max=16384, per=13.43%, avg=13248.00, stdev=4434.97, samples=2 00:22:16.636 iops : min= 2528, max= 4096, avg=3312.00, stdev=1108.74, samples=2 00:22:16.636 lat (msec) : 4=0.12%, 10=31.65%, 20=40.09%, 50=21.39%, 100=6.74% 00:22:16.636 cpu : usr=2.09%, sys=2.99%, ctx=405, majf=0, minf=1 00:22:16.636 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:22:16.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:16.636 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:16.636 issued rwts: total=3072,3439,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:16.636 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:16.636 job1: (groupid=0, jobs=1): err= 0: pid=1987327: Mon Jul 22 10:38:21 2024 00:22:16.636 read: IOPS=7132, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1005msec) 00:22:16.636 slat (nsec): min=872, max=10137k, avg=73084.02, stdev=551472.11 00:22:16.636 clat (usec): min=2469, max=20100, avg=9709.63, stdev=2405.26 00:22:16.636 lat (usec): min=2508, max=20115, avg=9782.71, stdev=2439.33 00:22:16.636 clat percentiles (usec): 00:22:16.636 | 1.00th=[ 3261], 5.00th=[ 6652], 10.00th=[ 7504], 20.00th=[ 8455], 00:22:16.636 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9241], 60.00th=[ 9372], 00:22:16.636 | 70.00th=[10028], 80.00th=[10814], 90.00th=[12911], 95.00th=[14746], 00:22:16.636 | 99.00th=[17433], 99.50th=[17695], 99.90th=[19268], 99.95th=[19792], 00:22:16.636 | 99.99th=[20055] 00:22:16.636 write: IOPS=7162, BW=28.0MiB/s (29.3MB/s)(28.1MiB/1005msec); 0 zone resets 00:22:16.636 slat (nsec): min=1521, max=8179.1k, avg=58229.68, stdev=418117.66 00:22:16.636 clat (usec): min=921, max=18038, avg=8036.40, stdev=2420.98 00:22:16.636 lat (usec): min=952, max=18048, avg=8094.63, stdev=2445.77 00:22:16.636 clat percentiles (usec): 00:22:16.636 | 1.00th=[ 1385], 5.00th=[ 3752], 10.00th=[ 5014], 20.00th=[ 5800], 00:22:16.636 | 30.00th=[ 6915], 40.00th=[ 8160], 50.00th=[ 8586], 60.00th=[ 8979], 00:22:16.636 | 
70.00th=[ 9241], 80.00th=[ 9503], 90.00th=[10814], 95.00th=[11994], 00:22:16.636 | 99.00th=[13042], 99.50th=[13698], 99.90th=[17695], 99.95th=[17957], 00:22:16.636 | 99.99th=[17957] 00:22:16.636 bw ( KiB/s): min=28672, max=28672, per=29.06%, avg=28672.00, stdev= 0.00, samples=2 00:22:16.636 iops : min= 7168, max= 7168, avg=7168.00, stdev= 0.00, samples=2 00:22:16.636 lat (usec) : 1000=0.01% 00:22:16.636 lat (msec) : 2=1.01%, 4=3.20%, 10=74.37%, 20=21.40%, 50=0.01% 00:22:16.636 cpu : usr=4.98%, sys=7.67%, ctx=540, majf=0, minf=1 00:22:16.636 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:22:16.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:16.636 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:16.636 issued rwts: total=7168,7198,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:16.636 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:16.636 job2: (groupid=0, jobs=1): err= 0: pid=1987328: Mon Jul 22 10:38:21 2024 00:22:16.636 read: IOPS=8294, BW=32.4MiB/s (34.0MB/s)(32.5MiB/1003msec) 00:22:16.636 slat (nsec): min=875, max=8613.8k, avg=57838.14, stdev=476731.01 00:22:16.636 clat (usec): min=1720, max=17696, avg=8186.59, stdev=2146.66 00:22:16.636 lat (usec): min=1729, max=17721, avg=8244.43, stdev=2172.78 00:22:16.636 clat percentiles (usec): 00:22:16.636 | 1.00th=[ 2507], 5.00th=[ 4047], 10.00th=[ 5997], 20.00th=[ 6783], 00:22:16.636 | 30.00th=[ 7308], 40.00th=[ 7701], 50.00th=[ 8029], 60.00th=[ 8455], 00:22:16.636 | 70.00th=[ 8979], 80.00th=[ 9503], 90.00th=[11076], 95.00th=[12125], 00:22:16.636 | 99.00th=[13960], 99.50th=[14615], 99.90th=[15401], 99.95th=[16057], 00:22:16.636 | 99.99th=[17695] 00:22:16.636 write: IOPS=8677, BW=33.9MiB/s (35.5MB/s)(34.0MiB/1003msec); 0 zone resets 00:22:16.636 slat (nsec): min=1545, max=6998.7k, avg=48294.96, stdev=352846.67 00:22:16.636 clat (usec): min=774, max=16064, avg=6799.33, stdev=2345.85 00:22:16.636 lat (usec): min=782, max=16066, avg=6847.63, stdev=2358.13 00:22:16.636 clat percentiles (usec): 00:22:16.636 | 1.00th=[ 1336], 5.00th=[ 2540], 10.00th=[ 3851], 20.00th=[ 4817], 00:22:16.636 | 30.00th=[ 5538], 40.00th=[ 6456], 50.00th=[ 7242], 60.00th=[ 7504], 00:22:16.636 | 70.00th=[ 7767], 80.00th=[ 8291], 90.00th=[ 9634], 95.00th=[10945], 00:22:16.636 | 99.00th=[12780], 99.50th=[13304], 99.90th=[14615], 99.95th=[15008], 00:22:16.636 | 99.99th=[16057] 00:22:16.636 bw ( KiB/s): min=33712, max=35920, per=35.28%, avg=34816.00, stdev=1561.29, samples=2 00:22:16.636 iops : min= 8428, max= 8980, avg=8704.00, stdev=390.32, samples=2 00:22:16.636 lat (usec) : 1000=0.14% 00:22:16.636 lat (msec) : 2=1.69%, 4=5.94%, 10=81.24%, 20=10.99% 00:22:16.636 cpu : usr=5.29%, sys=9.58%, ctx=582, majf=0, minf=1 00:22:16.636 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:22:16.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:16.636 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:16.636 issued rwts: total=8319,8704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:16.636 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:16.636 job3: (groupid=0, jobs=1): err= 0: pid=1987329: Mon Jul 22 10:38:21 2024 00:22:16.636 read: IOPS=5074, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1009msec) 00:22:16.636 slat (nsec): min=972, max=9162.1k, avg=83342.25, stdev=569563.48 00:22:16.636 clat (usec): min=4201, max=28665, avg=10394.33, stdev=3404.19 00:22:16.636 lat (usec): min=4207, max=28668, 
avg=10477.67, stdev=3442.80 00:22:16.636 clat percentiles (usec): 00:22:16.636 | 1.00th=[ 5866], 5.00th=[ 7046], 10.00th=[ 7570], 20.00th=[ 8029], 00:22:16.636 | 30.00th=[ 8356], 40.00th=[ 8586], 50.00th=[ 9241], 60.00th=[10421], 00:22:16.636 | 70.00th=[11207], 80.00th=[12256], 90.00th=[14615], 95.00th=[16581], 00:22:16.636 | 99.00th=[23462], 99.50th=[26608], 99.90th=[27919], 99.95th=[28705], 00:22:16.636 | 99.99th=[28705] 00:22:16.636 write: IOPS=5500, BW=21.5MiB/s (22.5MB/s)(21.7MiB/1009msec); 0 zone resets 00:22:16.636 slat (nsec): min=1683, max=9377.2k, avg=97770.63, stdev=516321.63 00:22:16.636 clat (usec): min=2670, max=62299, avg=13398.51, stdev=9215.74 00:22:16.636 lat (usec): min=2679, max=62303, avg=13496.28, stdev=9277.04 00:22:16.637 clat percentiles (usec): 00:22:16.637 | 1.00th=[ 4015], 5.00th=[ 5014], 10.00th=[ 5538], 20.00th=[ 6849], 00:22:16.637 | 30.00th=[ 8029], 40.00th=[ 8979], 50.00th=[11207], 60.00th=[13566], 00:22:16.637 | 70.00th=[14615], 80.00th=[18482], 90.00th=[21890], 95.00th=[25035], 00:22:16.637 | 99.00th=[56886], 99.50th=[58983], 99.90th=[62129], 99.95th=[62129], 00:22:16.637 | 99.99th=[62129] 00:22:16.637 bw ( KiB/s): min=19120, max=24264, per=21.98%, avg=21692.00, stdev=3637.36, samples=2 00:22:16.637 iops : min= 4780, max= 6066, avg=5423.00, stdev=909.34, samples=2 00:22:16.637 lat (msec) : 4=0.52%, 10=47.58%, 20=42.76%, 50=8.04%, 100=1.11% 00:22:16.637 cpu : usr=4.56%, sys=5.06%, ctx=505, majf=0, minf=1 00:22:16.637 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:22:16.637 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:16.637 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:16.637 issued rwts: total=5120,5550,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:16.637 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:16.637 00:22:16.637 Run status group 0 (all jobs): 00:22:16.637 READ: bw=91.7MiB/s (96.1MB/s), 11.9MiB/s-32.4MiB/s (12.5MB/s-34.0MB/s), io=92.5MiB (97.0MB), run=1003-1009msec 00:22:16.637 WRITE: bw=96.4MiB/s (101MB/s), 13.4MiB/s-33.9MiB/s (14.0MB/s-35.5MB/s), io=97.2MiB (102MB), run=1003-1009msec 00:22:16.637 00:22:16.637 Disk stats (read/write): 00:22:16.637 nvme0n1: ios=2286/2560, merge=0/0, ticks=16474/35360, in_queue=51834, util=87.37% 00:22:16.637 nvme0n2: ios=5928/6144, merge=0/0, ticks=53428/46401, in_queue=99829, util=97.25% 00:22:16.637 nvme0n3: ios=7024/7168, merge=0/0, ticks=52134/45751, in_queue=97885, util=88.41% 00:22:16.637 nvme0n4: ios=4463/4608, merge=0/0, ticks=44155/58452, in_queue=102607, util=97.12% 00:22:16.637 10:38:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:22:16.637 [global] 00:22:16.637 thread=1 00:22:16.637 invalidate=1 00:22:16.637 rw=randwrite 00:22:16.637 time_based=1 00:22:16.637 runtime=1 00:22:16.637 ioengine=libaio 00:22:16.637 direct=1 00:22:16.637 bs=4096 00:22:16.637 iodepth=128 00:22:16.637 norandommap=0 00:22:16.637 numjobs=1 00:22:16.637 00:22:16.637 verify_dump=1 00:22:16.637 verify_backlog=512 00:22:16.637 verify_state_save=0 00:22:16.637 do_verify=1 00:22:16.637 verify=crc32c-intel 00:22:16.637 [job0] 00:22:16.637 filename=/dev/nvme0n1 00:22:16.637 [job1] 00:22:16.637 filename=/dev/nvme0n2 00:22:16.637 [job2] 00:22:16.637 filename=/dev/nvme0n3 00:22:16.637 [job3] 00:22:16.637 filename=/dev/nvme0n4 00:22:16.637 Could not set queue depth (nvme0n1) 00:22:16.637 Could not set queue depth 
(nvme0n2) 00:22:16.637 Could not set queue depth (nvme0n3) 00:22:16.637 Could not set queue depth (nvme0n4) 00:22:16.905 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:16.905 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:16.905 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:16.905 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:16.905 fio-3.35 00:22:16.905 Starting 4 threads 00:22:18.319 00:22:18.319 job0: (groupid=0, jobs=1): err= 0: pid=1987784: Mon Jul 22 10:38:23 2024 00:22:18.319 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:22:18.319 slat (nsec): min=880, max=22120k, avg=144703.09, stdev=965190.61 00:22:18.319 clat (usec): min=4739, max=52433, avg=18609.17, stdev=10115.93 00:22:18.319 lat (usec): min=4742, max=52461, avg=18753.88, stdev=10205.98 00:22:18.319 clat percentiles (usec): 00:22:18.319 | 1.00th=[ 4817], 5.00th=[ 7898], 10.00th=[ 8094], 20.00th=[ 9241], 00:22:18.319 | 30.00th=[ 9765], 40.00th=[11731], 50.00th=[17171], 60.00th=[20317], 00:22:18.319 | 70.00th=[24773], 80.00th=[26870], 90.00th=[33162], 95.00th=[36963], 00:22:18.319 | 99.00th=[44827], 99.50th=[44827], 99.90th=[45351], 99.95th=[52167], 00:22:18.319 | 99.99th=[52691] 00:22:18.319 write: IOPS=3549, BW=13.9MiB/s (14.5MB/s)(13.9MiB/1004msec); 0 zone resets 00:22:18.319 slat (nsec): min=1445, max=14933k, avg=151887.67, stdev=793445.25 00:22:18.319 clat (usec): min=833, max=64181, avg=19639.29, stdev=10726.98 00:22:18.319 lat (usec): min=3947, max=64190, avg=19791.17, stdev=10785.16 00:22:18.319 clat percentiles (usec): 00:22:18.319 | 1.00th=[ 4424], 5.00th=[ 7570], 10.00th=[ 8586], 20.00th=[12125], 00:22:18.319 | 30.00th=[13042], 40.00th=[13960], 50.00th=[17171], 60.00th=[19006], 00:22:18.319 | 70.00th=[22676], 80.00th=[27919], 90.00th=[34866], 95.00th=[40633], 00:22:18.319 | 99.00th=[59507], 99.50th=[63177], 99.90th=[64226], 99.95th=[64226], 00:22:18.319 | 99.99th=[64226] 00:22:18.319 bw ( KiB/s): min=12288, max=15200, per=14.32%, avg=13744.00, stdev=2059.09, samples=2 00:22:18.319 iops : min= 3072, max= 3800, avg=3436.00, stdev=514.77, samples=2 00:22:18.319 lat (usec) : 1000=0.02% 00:22:18.319 lat (msec) : 4=0.17%, 10=22.51%, 20=38.64%, 50=37.45%, 100=1.22% 00:22:18.319 cpu : usr=1.69%, sys=3.79%, ctx=369, majf=0, minf=1 00:22:18.319 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:22:18.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.319 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:18.319 issued rwts: total=3072,3564,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.319 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.319 job1: (groupid=0, jobs=1): err= 0: pid=1987802: Mon Jul 22 10:38:23 2024 00:22:18.319 read: IOPS=7405, BW=28.9MiB/s (30.3MB/s)(29.0MiB/1002msec) 00:22:18.319 slat (nsec): min=912, max=8102.1k, avg=63981.19, stdev=444515.93 00:22:18.319 clat (usec): min=1279, max=32229, avg=8052.50, stdev=2901.01 00:22:18.319 lat (usec): min=2849, max=32231, avg=8116.48, stdev=2931.74 00:22:18.319 clat percentiles (usec): 00:22:18.319 | 1.00th=[ 3654], 5.00th=[ 5211], 10.00th=[ 5735], 20.00th=[ 6390], 00:22:18.319 | 30.00th=[ 6652], 40.00th=[ 6980], 50.00th=[ 7308], 60.00th=[ 8029], 00:22:18.319 | 70.00th=[ 8586], 
80.00th=[ 9241], 90.00th=[10814], 95.00th=[12256], 00:22:18.319 | 99.00th=[19006], 99.50th=[28443], 99.90th=[31589], 99.95th=[32113], 00:22:18.319 | 99.99th=[32113] 00:22:18.319 write: IOPS=7664, BW=29.9MiB/s (31.4MB/s)(30.0MiB/1002msec); 0 zone resets 00:22:18.319 slat (nsec): min=1595, max=8109.1k, avg=63489.89, stdev=365692.34 00:22:18.319 clat (usec): min=2089, max=64937, avg=8753.53, stdev=7378.13 00:22:18.319 lat (usec): min=2096, max=64947, avg=8817.02, stdev=7423.89 00:22:18.319 clat percentiles (usec): 00:22:18.319 | 1.00th=[ 2671], 5.00th=[ 3884], 10.00th=[ 4490], 20.00th=[ 6063], 00:22:18.319 | 30.00th=[ 6521], 40.00th=[ 6652], 50.00th=[ 6718], 60.00th=[ 6849], 00:22:18.319 | 70.00th=[ 7308], 80.00th=[ 9503], 90.00th=[14484], 95.00th=[19268], 00:22:18.319 | 99.00th=[52691], 99.50th=[60556], 99.90th=[64750], 99.95th=[64750], 00:22:18.319 | 99.99th=[64750] 00:22:18.319 bw ( KiB/s): min=24576, max=36864, per=32.00%, avg=30720.00, stdev=8688.93, samples=2 00:22:18.319 iops : min= 6144, max= 9216, avg=7680.00, stdev=2172.23, samples=2 00:22:18.319 lat (msec) : 2=0.01%, 4=3.39%, 10=81.38%, 20=12.42%, 50=2.22% 00:22:18.319 lat (msec) : 100=0.58% 00:22:18.319 cpu : usr=5.79%, sys=6.69%, ctx=841, majf=0, minf=1 00:22:18.319 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:22:18.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.319 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:18.319 issued rwts: total=7420,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.319 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.319 job2: (groupid=0, jobs=1): err= 0: pid=1987824: Mon Jul 22 10:38:23 2024 00:22:18.319 read: IOPS=4578, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:22:18.319 slat (nsec): min=886, max=10779k, avg=109086.25, stdev=735487.58 00:22:18.319 clat (usec): min=1269, max=33012, avg=13731.41, stdev=3726.35 00:22:18.319 lat (usec): min=6777, max=33037, avg=13840.50, stdev=3791.65 00:22:18.319 clat percentiles (usec): 00:22:18.319 | 1.00th=[ 7701], 5.00th=[ 9241], 10.00th=[ 9896], 20.00th=[10814], 00:22:18.319 | 30.00th=[10945], 40.00th=[11338], 50.00th=[12387], 60.00th=[14877], 00:22:18.319 | 70.00th=[15533], 80.00th=[16909], 90.00th=[19006], 95.00th=[20579], 00:22:18.319 | 99.00th=[22938], 99.50th=[23725], 99.90th=[26870], 99.95th=[27395], 00:22:18.319 | 99.99th=[32900] 00:22:18.319 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:22:18.319 slat (nsec): min=1474, max=8084.0k, avg=104409.49, stdev=555324.98 00:22:18.319 clat (usec): min=5312, max=35257, avg=13882.84, stdev=5683.12 00:22:18.319 lat (usec): min=5315, max=36046, avg=13987.25, stdev=5731.32 00:22:18.319 clat percentiles (usec): 00:22:18.319 | 1.00th=[ 6718], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[10421], 00:22:18.320 | 30.00th=[10683], 40.00th=[10814], 50.00th=[11338], 60.00th=[13042], 00:22:18.320 | 70.00th=[13960], 80.00th=[16581], 90.00th=[22938], 95.00th=[25822], 00:22:18.320 | 99.00th=[34341], 99.50th=[34341], 99.90th=[35390], 99.95th=[35390], 00:22:18.320 | 99.99th=[35390] 00:22:18.320 bw ( KiB/s): min=12960, max=23904, per=19.20%, avg=18432.00, stdev=7738.58, samples=2 00:22:18.320 iops : min= 3240, max= 5976, avg=4608.00, stdev=1934.64, samples=2 00:22:18.320 lat (msec) : 2=0.01%, 10=12.48%, 20=76.49%, 50=11.02% 00:22:18.320 cpu : usr=2.59%, sys=3.79%, ctx=484, majf=0, minf=1 00:22:18.320 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:22:18.320 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.320 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:18.320 issued rwts: total=4597,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.320 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.320 job3: (groupid=0, jobs=1): err= 0: pid=1987831: Mon Jul 22 10:38:23 2024 00:22:18.320 read: IOPS=8175, BW=31.9MiB/s (33.5MB/s)(32.0MiB/1002msec) 00:22:18.320 slat (nsec): min=909, max=10957k, avg=62322.10, stdev=481353.93 00:22:18.320 clat (usec): min=2878, max=18036, avg=8399.61, stdev=2168.69 00:22:18.320 lat (usec): min=2953, max=18059, avg=8461.93, stdev=2192.20 00:22:18.320 clat percentiles (usec): 00:22:18.320 | 1.00th=[ 3818], 5.00th=[ 5538], 10.00th=[ 6390], 20.00th=[ 6980], 00:22:18.320 | 30.00th=[ 7242], 40.00th=[ 7504], 50.00th=[ 7832], 60.00th=[ 8225], 00:22:18.320 | 70.00th=[ 9241], 80.00th=[ 9896], 90.00th=[11600], 95.00th=[13173], 00:22:18.320 | 99.00th=[14222], 99.50th=[14484], 99.90th=[15533], 99.95th=[16057], 00:22:18.320 | 99.99th=[17957] 00:22:18.320 write: IOPS=8227, BW=32.1MiB/s (33.7MB/s)(32.2MiB/1002msec); 0 zone resets 00:22:18.320 slat (nsec): min=1613, max=6509.3k, avg=49082.04, stdev=323994.08 00:22:18.320 clat (usec): min=793, max=40797, avg=7065.71, stdev=2941.18 00:22:18.320 lat (usec): min=809, max=40807, avg=7114.79, stdev=2950.23 00:22:18.320 clat percentiles (usec): 00:22:18.320 | 1.00th=[ 1958], 5.00th=[ 3621], 10.00th=[ 4359], 20.00th=[ 5014], 00:22:18.320 | 30.00th=[ 5997], 40.00th=[ 6783], 50.00th=[ 7373], 60.00th=[ 7635], 00:22:18.320 | 70.00th=[ 7832], 80.00th=[ 8094], 90.00th=[ 8848], 95.00th=[10028], 00:22:18.320 | 99.00th=[13829], 99.50th=[27132], 99.90th=[39584], 99.95th=[40633], 00:22:18.320 | 99.99th=[40633] 00:22:18.320 bw ( KiB/s): min=32768, max=32816, per=34.16%, avg=32792.00, stdev=33.94, samples=2 00:22:18.320 iops : min= 8192, max= 8204, avg=8198.00, stdev= 8.49, samples=2 00:22:18.320 lat (usec) : 1000=0.02% 00:22:18.320 lat (msec) : 2=0.55%, 4=3.92%, 10=83.57%, 20=11.56%, 50=0.38% 00:22:18.320 cpu : usr=5.19%, sys=8.09%, ctx=673, majf=0, minf=1 00:22:18.320 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:22:18.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.320 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:18.320 issued rwts: total=8192,8244,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.320 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.320 00:22:18.320 Run status group 0 (all jobs): 00:22:18.320 READ: bw=90.6MiB/s (95.0MB/s), 12.0MiB/s-31.9MiB/s (12.5MB/s-33.5MB/s), io=90.9MiB (95.4MB), run=1002-1004msec 00:22:18.320 WRITE: bw=93.8MiB/s (98.3MB/s), 13.9MiB/s-32.1MiB/s (14.5MB/s-33.7MB/s), io=94.1MiB (98.7MB), run=1002-1004msec 00:22:18.320 00:22:18.320 Disk stats (read/write): 00:22:18.320 nvme0n1: ios=2610/2629, merge=0/0, ticks=20313/23917, in_queue=44230, util=96.69% 00:22:18.320 nvme0n2: ios=5982/6144, merge=0/0, ticks=46290/55783, in_queue=102073, util=98.88% 00:22:18.320 nvme0n3: ios=3834/4096, merge=0/0, ticks=25037/26325, in_queue=51362, util=88.51% 00:22:18.320 nvme0n4: ios=6678/7088, merge=0/0, ticks=54447/47743, in_queue=102190, util=97.33% 00:22:18.320 10:38:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:22:18.320 10:38:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1987895 00:22:18.320 10:38:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 
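This is the start of the hotplug phase: fio.sh backgrounds a 10-second read job (fio_pid=1987895), sleeps briefly, and then tears down the bdevs behind the four namespaces over the RPC socket, so the reads below are expected to fail with Remote I/O errors; the script itself later prints 'nvmf hotplug test: fio failed as expected'. A sketch of the first few delete calls exactly as they appear further down, using the same rpc.py path from this workspace:

# Delete the raid/concat bdevs and the first malloc bdevs while fio is still reading.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC bdev_raid_delete concat0
$RPC bdev_raid_delete raid0
$RPC bdev_malloc_delete Malloc0
$RPC bdev_malloc_delete Malloc1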
00:22:18.320 10:38:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:22:18.320 [global] 00:22:18.320 thread=1 00:22:18.320 invalidate=1 00:22:18.320 rw=read 00:22:18.320 time_based=1 00:22:18.320 runtime=10 00:22:18.320 ioengine=libaio 00:22:18.320 direct=1 00:22:18.320 bs=4096 00:22:18.320 iodepth=1 00:22:18.320 norandommap=1 00:22:18.320 numjobs=1 00:22:18.320 00:22:18.320 [job0] 00:22:18.320 filename=/dev/nvme0n1 00:22:18.320 [job1] 00:22:18.320 filename=/dev/nvme0n2 00:22:18.320 [job2] 00:22:18.320 filename=/dev/nvme0n3 00:22:18.320 [job3] 00:22:18.320 filename=/dev/nvme0n4 00:22:18.320 Could not set queue depth (nvme0n1) 00:22:18.320 Could not set queue depth (nvme0n2) 00:22:18.320 Could not set queue depth (nvme0n3) 00:22:18.320 Could not set queue depth (nvme0n4) 00:22:18.584 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:18.584 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:18.584 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:18.584 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:18.584 fio-3.35 00:22:18.584 Starting 4 threads 00:22:21.125 10:38:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:22:21.125 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=11939840, buflen=4096 00:22:21.125 fio: pid=1988272, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:22:21.386 10:38:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:22:21.386 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=270336, buflen=4096 00:22:21.386 fio: pid=1988266, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:22:21.386 10:38:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:22:21.386 10:38:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:22:21.647 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=12333056, buflen=4096 00:22:21.647 fio: pid=1988232, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:22:21.647 10:38:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:22:21.647 10:38:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:22:21.647 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=19734528, buflen=4096 00:22:21.647 fio: pid=1988248, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:22:21.647 10:38:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:22:21.647 10:38:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:22:21.647 00:22:21.647 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, 
error=Remote I/O error): pid=1988232: Mon Jul 22 10:38:27 2024 00:22:21.647 read: IOPS=1044, BW=4178KiB/s (4278kB/s)(11.8MiB/2883msec) 00:22:21.647 slat (usec): min=5, max=11609, avg=30.27, stdev=262.42 00:22:21.647 clat (usec): min=145, max=42005, avg=920.98, stdev=2356.95 00:22:21.647 lat (usec): min=151, max=42030, avg=951.25, stdev=2372.53 00:22:21.647 clat percentiles (usec): 00:22:21.647 | 1.00th=[ 225], 5.00th=[ 318], 10.00th=[ 359], 20.00th=[ 424], 00:22:21.647 | 30.00th=[ 469], 40.00th=[ 570], 50.00th=[ 775], 60.00th=[ 881], 00:22:21.647 | 70.00th=[ 1139], 80.00th=[ 1205], 90.00th=[ 1237], 95.00th=[ 1270], 00:22:21.647 | 99.00th=[ 1336], 99.50th=[ 1401], 99.90th=[42206], 99.95th=[42206], 00:22:21.647 | 99.99th=[42206] 00:22:21.647 bw ( KiB/s): min= 2392, max= 8568, per=30.58%, avg=4344.00, stdev=2434.61, samples=5 00:22:21.647 iops : min= 598, max= 2142, avg=1086.00, stdev=608.65, samples=5 00:22:21.647 lat (usec) : 250=1.89%, 500=33.50%, 750=13.71%, 1000=12.68% 00:22:21.647 lat (msec) : 2=37.78%, 4=0.03%, 10=0.03%, 50=0.33% 00:22:21.647 cpu : usr=1.42%, sys=3.19%, ctx=3016, majf=0, minf=1 00:22:21.647 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:21.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.647 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.647 issued rwts: total=3012,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.647 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:21.647 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1988248: Mon Jul 22 10:38:27 2024 00:22:21.647 read: IOPS=1583, BW=6331KiB/s (6483kB/s)(18.8MiB/3044msec) 00:22:21.647 slat (usec): min=5, max=13589, avg=31.94, stdev=294.76 00:22:21.647 clat (usec): min=150, max=1484, avg=593.81, stdev=258.54 00:22:21.647 lat (usec): min=167, max=14237, avg=625.75, stdev=393.03 00:22:21.647 clat percentiles (usec): 00:22:21.647 | 1.00th=[ 237], 5.00th=[ 289], 10.00th=[ 347], 20.00th=[ 408], 00:22:21.647 | 30.00th=[ 449], 40.00th=[ 474], 50.00th=[ 510], 60.00th=[ 570], 00:22:21.647 | 70.00th=[ 635], 80.00th=[ 742], 90.00th=[ 1123], 95.00th=[ 1205], 00:22:21.647 | 99.00th=[ 1287], 99.50th=[ 1303], 99.90th=[ 1369], 99.95th=[ 1401], 00:22:21.647 | 99.99th=[ 1483] 00:22:21.647 bw ( KiB/s): min= 3280, max= 8160, per=43.75%, avg=6214.40, stdev=2190.46, samples=5 00:22:21.647 iops : min= 820, max= 2040, avg=1553.60, stdev=547.62, samples=5 00:22:21.647 lat (usec) : 250=2.01%, 500=45.65%, 750=32.62%, 1000=8.57% 00:22:21.647 lat (msec) : 2=11.12% 00:22:21.647 cpu : usr=2.07%, sys=5.75%, ctx=4825, majf=0, minf=1 00:22:21.647 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:21.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.647 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.647 issued rwts: total=4819,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.647 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:21.647 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1988266: Mon Jul 22 10:38:27 2024 00:22:21.648 read: IOPS=24, BW=97.2KiB/s (99.5kB/s)(264KiB/2716msec) 00:22:21.648 slat (nsec): min=25996, max=40936, avg=26758.16, stdev=1791.05 00:22:21.648 clat (usec): min=902, max=42884, avg=41085.42, stdev=5046.88 00:22:21.648 lat (usec): min=943, max=42911, avg=41112.17, stdev=5045.12 00:22:21.648 clat 
percentiles (usec): 00:22:21.648 | 1.00th=[ 906], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:22:21.648 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:22:21.648 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:22:21.648 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:22:21.648 | 99.99th=[42730] 00:22:21.648 bw ( KiB/s): min= 96, max= 96, per=0.68%, avg=96.00, stdev= 0.00, samples=5 00:22:21.648 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:22:21.648 lat (usec) : 1000=1.49% 00:22:21.648 lat (msec) : 50=97.01% 00:22:21.648 cpu : usr=0.11%, sys=0.00%, ctx=69, majf=0, minf=1 00:22:21.648 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:21.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.648 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.648 issued rwts: total=67,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.648 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:21.648 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1988272: Mon Jul 22 10:38:27 2024 00:22:21.648 read: IOPS=1144, BW=4578KiB/s (4688kB/s)(11.4MiB/2547msec) 00:22:21.648 slat (nsec): min=6263, max=59854, avg=22851.98, stdev=7112.70 00:22:21.648 clat (usec): min=318, max=1327, avg=844.84, stdev=126.22 00:22:21.648 lat (usec): min=329, max=1351, avg=867.69, stdev=128.07 00:22:21.648 clat percentiles (usec): 00:22:21.648 | 1.00th=[ 553], 5.00th=[ 660], 10.00th=[ 701], 20.00th=[ 758], 00:22:21.648 | 30.00th=[ 783], 40.00th=[ 807], 50.00th=[ 832], 60.00th=[ 857], 00:22:21.648 | 70.00th=[ 881], 80.00th=[ 922], 90.00th=[ 1020], 95.00th=[ 1090], 00:22:21.648 | 99.00th=[ 1237], 99.50th=[ 1254], 99.90th=[ 1303], 99.95th=[ 1319], 00:22:21.648 | 99.99th=[ 1336] 00:22:21.648 bw ( KiB/s): min= 4176, max= 4808, per=32.39%, avg=4601.60, stdev=277.09, samples=5 00:22:21.648 iops : min= 1044, max= 1202, avg=1150.40, stdev=69.27, samples=5 00:22:21.648 lat (usec) : 500=0.45%, 750=17.73%, 1000=70.06% 00:22:21.648 lat (msec) : 2=11.73% 00:22:21.648 cpu : usr=1.53%, sys=2.83%, ctx=2916, majf=0, minf=2 00:22:21.648 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:21.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.648 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.648 issued rwts: total=2916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.648 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:21.648 00:22:21.648 Run status group 0 (all jobs): 00:22:21.648 READ: bw=13.9MiB/s (14.5MB/s), 97.2KiB/s-6331KiB/s (99.5kB/s-6483kB/s), io=42.2MiB (44.3MB), run=2547-3044msec 00:22:21.648 00:22:21.648 Disk stats (read/write): 00:22:21.648 nvme0n1: ios=2980/0, merge=0/0, ticks=2531/0, in_queue=2531, util=94.19% 00:22:21.648 nvme0n2: ios=4517/0, merge=0/0, ticks=2334/0, in_queue=2334, util=95.33% 00:22:21.648 nvme0n3: ios=100/0, merge=0/0, ticks=3396/0, in_queue=3396, util=99.56% 00:22:21.648 nvme0n4: ios=2723/0, merge=0/0, ticks=2186/0, in_queue=2186, util=96.02% 00:22:21.908 10:38:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:22:21.908 10:38:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:22:22.168 10:38:27 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:22:22.168 10:38:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:22:22.168 10:38:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:22:22.168 10:38:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:22:22.428 10:38:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:22:22.428 10:38:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:22:22.688 10:38:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:22:22.688 10:38:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 1987895 00:22:22.688 10:38:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:22:22.688 10:38:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:22.688 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:22.688 10:38:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:22:22.688 10:38:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:22:22.688 10:38:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:22:22.688 10:38:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:22.688 10:38:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:22.688 10:38:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:22:22.688 10:38:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:22:22.688 10:38:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:22:22.688 10:38:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:22:22.688 nvmf hotplug test: fio failed as expected 00:22:22.688 10:38:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:22.949 10:38:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:22:22.949 10:38:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:22:22.949 10:38:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:22:22.949 10:38:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:22:22.949 10:38:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:22:22.949 10:38:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:22.949 10:38:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:22:22.949 10:38:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:22.949 10:38:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:22:22.949 10:38:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:22.949 10:38:28 nvmf_tcp.nvmf_fio_target -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:22.949 rmmod nvme_tcp 00:22:22.949 rmmod nvme_fabrics 00:22:22.949 rmmod nvme_keyring 00:22:22.949 10:38:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:22.949 10:38:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:22:22.949 10:38:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:22:22.949 10:38:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1984511 ']' 00:22:22.949 10:38:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1984511 00:22:22.949 10:38:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 1984511 ']' 00:22:22.949 10:38:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 1984511 00:22:22.949 10:38:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:22:22.949 10:38:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:22.949 10:38:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1984511 00:22:22.949 10:38:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:22.949 10:38:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:22.949 10:38:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1984511' 00:22:22.949 killing process with pid 1984511 00:22:22.949 10:38:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 1984511 00:22:22.949 10:38:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 1984511 00:22:23.209 10:38:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:23.209 10:38:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:23.209 10:38:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:23.209 10:38:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:23.209 10:38:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:23.209 10:38:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.209 10:38:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:23.209 10:38:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:25.119 10:38:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:25.119 00:22:25.119 real 0m29.042s 00:22:25.119 user 2m43.647s 00:22:25.119 sys 0m9.800s 00:22:25.119 10:38:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:25.119 10:38:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.119 ************************************ 00:22:25.119 END TEST nvmf_fio_target 00:22:25.119 ************************************ 00:22:25.119 10:38:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:25.119 10:38:30 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:22:25.119 10:38:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:25.119 10:38:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:25.120 10:38:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:25.384 
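The bdevio suite that starts next is launched through the same run_test wrapper that drove nvmf_fio_target; the standalone equivalent, with the CI workspace path as it appears in the trace above, would simply be:

# Run the bdevio suite directly against the TCP transport.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp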
************************************ 00:22:25.384 START TEST nvmf_bdevio 00:22:25.384 ************************************ 00:22:25.384 10:38:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:22:25.384 * Looking for test storage... 00:22:25.384 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:25.384 10:38:30 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:25.384 10:38:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:22:25.384 10:38:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:25.384 10:38:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:25.384 10:38:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:25.384 10:38:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:25.384 10:38:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:25.384 10:38:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:25.384 10:38:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:25.384 10:38:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:25.384 10:38:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:25.384 10:38:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:25.384 10:38:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:25.384 10:38:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:25.384 10:38:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:25.384 10:38:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:25.384 10:38:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:25.384 10:38:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:25.384 10:38:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:25.384 10:38:30 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:25.384 10:38:30 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:25.384 10:38:30 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:25.384 10:38:30 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.384 10:38:30 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.384 10:38:30 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.384 10:38:30 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:22:25.384 10:38:30 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.384 10:38:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:22:25.384 10:38:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:25.384 10:38:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:25.384 10:38:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:25.384 10:38:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:25.384 10:38:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:25.384 10:38:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:25.384 10:38:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:25.384 10:38:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:25.384 10:38:30 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:25.384 10:38:30 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:25.384 10:38:30 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:22:25.384 10:38:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:25.384 10:38:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:25.384 10:38:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:25.384 10:38:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:25.384 10:38:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:25.384 10:38:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:25.384 10:38:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:22:25.384 10:38:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:25.384 10:38:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:25.384 10:38:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:25.384 10:38:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:22:25.384 10:38:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:33.597 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:33.597 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:22:33.597 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:33.597 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:33.597 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:33.597 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:33.597 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:33.597 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:22:33.597 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:33.597 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:22:33.597 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:22:33.597 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:22:33.597 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:22:33.597 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:22:33.597 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:22:33.597 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:33.597 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:33.597 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:33.597 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:33.597 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:33.597 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:33.597 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:33.597 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:33.597 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:33.597 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:33.597 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:33.597 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:33.597 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:33.597 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:33.597 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:33.597 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:33.597 10:38:38 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:33.597 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:33.597 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:33.597 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:33.597 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:33.597 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:33.597 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:33.597 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:33.597 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:33.597 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:33.598 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:33.598 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:33.598 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:33.598 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:33.598 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:33.598 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:33.598 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:33.598 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:33.598 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:33.598 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:33.598 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:33.598 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.598 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:33.598 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:33.598 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:33.598 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:33.598 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.598 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:33.598 Found net devices under 0000:31:00.0: cvl_0_0 00:22:33.598 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.598 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:33.598 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.598 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:33.598 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:33.598 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:33.598 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:33.598 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.598 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:33.598 
Found net devices under 0000:31:00.1: cvl_0_1 00:22:33.598 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.598 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:33.598 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:22:33.598 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:33.598 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:33.598 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:33.598 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:33.598 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:33.598 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:33.598 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:33.598 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:33.598 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:33.598 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:33.598 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:33.598 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:33.598 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:33.598 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:33.598 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:33.598 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:33.598 10:38:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:33.598 10:38:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:33.598 10:38:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:33.598 10:38:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:33.598 10:38:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:33.598 10:38:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:33.598 10:38:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:33.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:33.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.673 ms 00:22:33.598 00:22:33.598 --- 10.0.0.2 ping statistics --- 00:22:33.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.598 rtt min/avg/max/mdev = 0.673/0.673/0.673/0.000 ms 00:22:33.598 10:38:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:33.598 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:33.598 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:22:33.598 00:22:33.598 --- 10.0.0.1 ping statistics --- 00:22:33.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.598 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:22:33.598 10:38:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:33.598 10:38:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:22:33.598 10:38:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:33.598 10:38:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:33.598 10:38:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:33.598 10:38:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:33.598 10:38:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:33.598 10:38:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:33.598 10:38:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:33.598 10:38:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:33.598 10:38:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:33.598 10:38:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:33.598 10:38:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:33.598 10:38:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1993752 00:22:33.598 10:38:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1993752 00:22:33.598 10:38:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:22:33.598 10:38:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 1993752 ']' 00:22:33.598 10:38:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:33.598 10:38:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:33.598 10:38:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:33.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:33.598 10:38:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:33.598 10:38:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:33.598 [2024-07-22 10:38:39.243587] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:22:33.598 [2024-07-22 10:38:39.243650] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:33.598 EAL: No free 2048 kB hugepages reported on node 1 00:22:33.860 [2024-07-22 10:38:39.340265] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:33.860 [2024-07-22 10:38:39.389636] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:33.860 [2024-07-22 10:38:39.389696] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:33.860 [2024-07-22 10:38:39.389704] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:33.860 [2024-07-22 10:38:39.389710] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:33.860 [2024-07-22 10:38:39.389716] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:33.860 [2024-07-22 10:38:39.389890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:33.860 [2024-07-22 10:38:39.390051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:22:33.860 [2024-07-22 10:38:39.390213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:33.860 [2024-07-22 10:38:39.390213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:22:34.433 10:38:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:34.433 10:38:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:22:34.433 10:38:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:34.433 10:38:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:34.433 10:38:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:34.433 10:38:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:34.433 10:38:40 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:34.433 10:38:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.433 10:38:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:34.433 [2024-07-22 10:38:40.103073] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:34.433 10:38:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.433 10:38:40 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:34.433 10:38:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.433 10:38:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:34.695 Malloc0 00:22:34.695 10:38:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.695 10:38:40 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:34.695 10:38:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.695 10:38:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:34.695 10:38:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.695 10:38:40 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:34.695 10:38:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.695 10:38:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:34.695 10:38:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.695 10:38:40 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:34.695 10:38:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.695 10:38:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
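For orientation, the nvmf_bdevio environment assembled in the trace above condenses to the short sequence below. Every command and value is copied from the xtrace output (paths abbreviated); rpc_cmd is the autotest helper that forwards RPC calls to the nvmf_tgt just started inside the namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # accept NVMe/TCP (port 4420) traffic on the test link
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd bdev_malloc_create 64 512 -b Malloc0                          # 64 MiB malloc bdev, 512-byte blocks
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
The bdevio binary then connects from the default namespace to 10.0.0.2:4420 using the JSON generated by gen_nvmf_target_json, as the next trace lines show.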
00:22:34.695 [2024-07-22 10:38:40.168238] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:34.695 10:38:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.695 10:38:40 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:22:34.695 10:38:40 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:34.695 10:38:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:22:34.695 10:38:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:22:34.695 10:38:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:34.695 10:38:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:34.695 { 00:22:34.695 "params": { 00:22:34.695 "name": "Nvme$subsystem", 00:22:34.695 "trtype": "$TEST_TRANSPORT", 00:22:34.695 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:34.695 "adrfam": "ipv4", 00:22:34.695 "trsvcid": "$NVMF_PORT", 00:22:34.695 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:34.695 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:34.695 "hdgst": ${hdgst:-false}, 00:22:34.695 "ddgst": ${ddgst:-false} 00:22:34.695 }, 00:22:34.695 "method": "bdev_nvme_attach_controller" 00:22:34.695 } 00:22:34.695 EOF 00:22:34.695 )") 00:22:34.695 10:38:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:22:34.695 10:38:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:22:34.695 10:38:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:22:34.695 10:38:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:34.695 "params": { 00:22:34.695 "name": "Nvme1", 00:22:34.695 "trtype": "tcp", 00:22:34.695 "traddr": "10.0.0.2", 00:22:34.695 "adrfam": "ipv4", 00:22:34.695 "trsvcid": "4420", 00:22:34.695 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:34.695 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:34.695 "hdgst": false, 00:22:34.695 "ddgst": false 00:22:34.695 }, 00:22:34.695 "method": "bdev_nvme_attach_controller" 00:22:34.695 }' 00:22:34.695 [2024-07-22 10:38:40.234764] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
00:22:34.695 [2024-07-22 10:38:40.234844] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1994017 ] 00:22:34.695 EAL: No free 2048 kB hugepages reported on node 1 00:22:34.695 [2024-07-22 10:38:40.307541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:34.695 [2024-07-22 10:38:40.348068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:34.695 [2024-07-22 10:38:40.348190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:34.695 [2024-07-22 10:38:40.348194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:34.957 I/O targets: 00:22:34.957 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:34.957 00:22:34.957 00:22:34.957 CUnit - A unit testing framework for C - Version 2.1-3 00:22:34.957 http://cunit.sourceforge.net/ 00:22:34.957 00:22:34.957 00:22:34.957 Suite: bdevio tests on: Nvme1n1 00:22:34.957 Test: blockdev write read block ...passed 00:22:34.957 Test: blockdev write zeroes read block ...passed 00:22:34.957 Test: blockdev write zeroes read no split ...passed 00:22:34.957 Test: blockdev write zeroes read split ...passed 00:22:35.218 Test: blockdev write zeroes read split partial ...passed 00:22:35.218 Test: blockdev reset ...[2024-07-22 10:38:40.657903] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:35.218 [2024-07-22 10:38:40.657973] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d2e90 (9): Bad file descriptor 00:22:35.218 [2024-07-22 10:38:40.677830] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:35.218 passed 00:22:35.218 Test: blockdev write read 8 blocks ...passed 00:22:35.218 Test: blockdev write read size > 128k ...passed 00:22:35.218 Test: blockdev write read invalid size ...passed 00:22:35.218 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:35.218 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:35.218 Test: blockdev write read max offset ...passed 00:22:35.218 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:35.218 Test: blockdev writev readv 8 blocks ...passed 00:22:35.218 Test: blockdev writev readv 30 x 1block ...passed 00:22:35.218 Test: blockdev writev readv block ...passed 00:22:35.218 Test: blockdev writev readv size > 128k ...passed 00:22:35.218 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:35.218 Test: blockdev comparev and writev ...[2024-07-22 10:38:40.860418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:35.218 [2024-07-22 10:38:40.860443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.218 [2024-07-22 10:38:40.860453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:35.218 [2024-07-22 10:38:40.860459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:35.218 [2024-07-22 10:38:40.860936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:35.218 [2024-07-22 10:38:40.860944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:35.218 [2024-07-22 10:38:40.860953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:35.218 [2024-07-22 10:38:40.860959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:35.218 [2024-07-22 10:38:40.861458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:35.218 [2024-07-22 10:38:40.861467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:35.218 [2024-07-22 10:38:40.861476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:35.218 [2024-07-22 10:38:40.861481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:35.218 [2024-07-22 10:38:40.861975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:35.218 [2024-07-22 10:38:40.861983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:35.218 [2024-07-22 10:38:40.861992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:35.218 [2024-07-22 10:38:40.861997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:35.218 passed 00:22:35.480 Test: blockdev nvme passthru rw ...passed 00:22:35.480 Test: blockdev nvme passthru vendor specific ...[2024-07-22 10:38:40.947299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:35.480 [2024-07-22 10:38:40.947310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:35.480 [2024-07-22 10:38:40.947660] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:35.480 [2024-07-22 10:38:40.947670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:35.480 [2024-07-22 10:38:40.948021] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:35.480 [2024-07-22 10:38:40.948030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:35.480 [2024-07-22 10:38:40.948382] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:35.480 [2024-07-22 10:38:40.948399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:35.480 passed 00:22:35.480 Test: blockdev nvme admin passthru ...passed 00:22:35.480 Test: blockdev copy ...passed 00:22:35.480 00:22:35.480 Run Summary: Type Total Ran Passed Failed Inactive 00:22:35.480 suites 1 1 n/a 0 0 00:22:35.480 tests 23 23 23 0 0 00:22:35.480 asserts 152 152 152 0 n/a 00:22:35.480 00:22:35.480 Elapsed time = 1.032 seconds 00:22:35.480 10:38:41 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:35.480 10:38:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.480 10:38:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:35.480 10:38:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.480 10:38:41 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:35.480 10:38:41 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:22:35.480 10:38:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:35.480 10:38:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:22:35.480 10:38:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:35.480 10:38:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:22:35.480 10:38:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:35.480 10:38:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:35.480 rmmod nvme_tcp 00:22:35.480 rmmod nvme_fabrics 00:22:35.480 rmmod nvme_keyring 00:22:35.480 10:38:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:35.480 10:38:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:22:35.480 10:38:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:22:35.480 10:38:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1993752 ']' 00:22:35.480 10:38:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1993752 00:22:35.480 10:38:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
1993752 ']' 00:22:35.480 10:38:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 1993752 00:22:35.742 10:38:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:22:35.742 10:38:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:35.742 10:38:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1993752 00:22:35.742 10:38:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:22:35.742 10:38:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:22:35.742 10:38:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1993752' 00:22:35.742 killing process with pid 1993752 00:22:35.742 10:38:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 1993752 00:22:35.742 10:38:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 1993752 00:22:35.742 10:38:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:35.742 10:38:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:35.742 10:38:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:35.742 10:38:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:35.742 10:38:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:35.742 10:38:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.742 10:38:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:35.742 10:38:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:38.285 10:38:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:38.285 00:22:38.285 real 0m12.595s 00:22:38.285 user 0m11.936s 00:22:38.285 sys 0m6.646s 00:22:38.285 10:38:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:38.285 10:38:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:38.285 ************************************ 00:22:38.285 END TEST nvmf_bdevio 00:22:38.285 ************************************ 00:22:38.285 10:38:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:38.285 10:38:43 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:22:38.285 10:38:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:38.285 10:38:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:38.285 10:38:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:38.285 ************************************ 00:22:38.285 START TEST nvmf_auth_target 00:22:38.285 ************************************ 00:22:38.285 10:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:22:38.285 * Looking for test storage... 
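As nvmf_bdevio hands off to nvmf_auth_target here, its cleanup just above is the stock nvmftestfini path; condensed from the trace, with the pid and interface names as logged:
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # bdevio.sh removes its subsystem first
modprobe -v -r nvme-tcp                                    # drops nvme_tcp, nvme_fabrics and nvme_keyring, per the rmmod output
modprobe -v -r nvme-fabrics
killprocess 1993752                                        # the namespaced nvmf_tgt for this test
_remove_spdk_ns                                            # removes the cvl_0_0_ns_spdk test namespace
ip -4 addr flush cvl_0_1                                   # leave the interfaces clean for the next test's nvmftestinit
The whole bdevio pass took roughly 12.6 s wall-clock (the 'real 0m12.595s' above), most of it environment setup around the 1.03 s of actual CUnit tests.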
00:22:38.285 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:38.285 10:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:38.285 10:38:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:22:38.285 10:38:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:38.285 10:38:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:38.285 10:38:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:38.285 10:38:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:38.285 10:38:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:38.285 10:38:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:38.285 10:38:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:38.285 10:38:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:38.285 10:38:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:38.285 10:38:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:38.285 10:38:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:38.285 10:38:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:38.285 10:38:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:38.285 10:38:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:38.285 10:38:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:38.285 10:38:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:38.285 10:38:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:38.285 10:38:43 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:38.285 10:38:43 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:38.285 10:38:43 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:38.285 10:38:43 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.285 10:38:43 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.285 10:38:43 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.285 10:38:43 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:22:38.285 10:38:43 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.285 10:38:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:22:38.285 10:38:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:38.285 10:38:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:38.285 10:38:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:38.285 10:38:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:38.285 10:38:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:38.285 10:38:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:38.285 10:38:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:38.285 10:38:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:38.285 10:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:22:38.285 10:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:22:38.285 10:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:22:38.285 10:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:38.285 10:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:22:38.285 10:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:22:38.285 10:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:22:38.285 10:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:22:38.285 10:38:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:38.286 10:38:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:38.286 10:38:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:38.286 10:38:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:38.286 10:38:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:38.286 10:38:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:38.286 10:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:38.286 10:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:38.286 10:38:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:38.286 10:38:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:38.286 10:38:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:22:38.286 10:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.429 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:46.429 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:22:46.429 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:46.429 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:46.429 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:46.429 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:46.429 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:46.429 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:22:46.429 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:46.429 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:22:46.429 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:22:46.429 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:22:46.429 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:22:46.429 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:22:46.429 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:22:46.429 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:46.429 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:46.429 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:46.429 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:46.429 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:46.429 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:46.429 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:46.429 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:46.429 10:38:51 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:46.429 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:46.429 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:46.429 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:46.429 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:46.429 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:46.429 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:46.429 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:46.429 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:46.429 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:46.430 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:46.430 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: 
cvl_0_0' 00:22:46.430 Found net devices under 0000:31:00.0: cvl_0_0 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:46.430 Found net devices under 0000:31:00.1: cvl_0_1 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:46.430 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:46.430 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.576 ms 00:22:46.430 00:22:46.430 --- 10.0.0.2 ping statistics --- 00:22:46.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.430 rtt min/avg/max/mdev = 0.576/0.576/0.576/0.000 ms 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:46.430 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:46.430 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:22:46.430 00:22:46.430 --- 10.0.0.1 ping statistics --- 00:22:46.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.430 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1998790 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1998790 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1998790 ']' 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:46.430 10:38:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1999133 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=8482cfc797d7c3bb2210caf51c12a2238cefd9b9c4928c92 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.DgX 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 8482cfc797d7c3bb2210caf51c12a2238cefd9b9c4928c92 0 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 8482cfc797d7c3bb2210caf51c12a2238cefd9b9c4928c92 0 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=8482cfc797d7c3bb2210caf51c12a2238cefd9b9c4928c92 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.DgX 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.DgX 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.DgX 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3ff011f4f84c965619a909d1649a8a7d5ba971adb5d1f7ba7792d8624ec8009b 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.6Wv 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3ff011f4f84c965619a909d1649a8a7d5ba971adb5d1f7ba7792d8624ec8009b 3 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3ff011f4f84c965619a909d1649a8a7d5ba971adb5d1f7ba7792d8624ec8009b 3 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3ff011f4f84c965619a909d1649a8a7d5ba971adb5d1f7ba7792d8624ec8009b 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.6Wv 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.6Wv 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.6Wv 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7efe1813a30bfe82897368a588c545a7 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Tm8 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7efe1813a30bfe82897368a588c545a7 1 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7efe1813a30bfe82897368a588c545a7 1 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=7efe1813a30bfe82897368a588c545a7 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Tm8 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Tm8 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.Tm8 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:47.372 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:22:47.373 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:22:47.373 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:22:47.373 10:38:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:47.373 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3cfd05ffaef6214e258c492c68f6426d9047382b7fff76fd 00:22:47.373 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:22:47.373 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.JEp 00:22:47.373 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3cfd05ffaef6214e258c492c68f6426d9047382b7fff76fd 2 00:22:47.373 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3cfd05ffaef6214e258c492c68f6426d9047382b7fff76fd 2 00:22:47.373 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:22:47.373 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:47.373 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3cfd05ffaef6214e258c492c68f6426d9047382b7fff76fd 00:22:47.373 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:22:47.373 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:22:47.373 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.JEp 00:22:47.373 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.JEp 00:22:47.373 10:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.JEp 00:22:47.373 10:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:22:47.373 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:22:47.373 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:47.373 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:22:47.373 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:22:47.373 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:22:47.373 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:47.373 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1640dff16e035d461df56b9a7b0bc3680b0008cc303bd962 00:22:47.373 
10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Gov 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1640dff16e035d461df56b9a7b0bc3680b0008cc303bd962 2 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1640dff16e035d461df56b9a7b0bc3680b0008cc303bd962 2 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1640dff16e035d461df56b9a7b0bc3680b0008cc303bd962 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Gov 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Gov 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.Gov 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4d694532ba9ca73aa3c091e1461c6af9 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.L7Q 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4d694532ba9ca73aa3c091e1461c6af9 1 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4d694532ba9ca73aa3c091e1461c6af9 1 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4d694532ba9ca73aa3c091e1461c6af9 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.L7Q 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.L7Q 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.L7Q 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2acf0c4ff17d63ab0f9836a4dea0e73f2f8348d770abda2397a1a0ce3033dd52 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.0ax 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2acf0c4ff17d63ab0f9836a4dea0e73f2f8348d770abda2397a1a0ce3033dd52 3 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 2acf0c4ff17d63ab0f9836a4dea0e73f2f8348d770abda2397a1a0ce3033dd52 3 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2acf0c4ff17d63ab0f9836a4dea0e73f2f8348d770abda2397a1a0ce3033dd52 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.0ax 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.0ax 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.0ax 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1998790 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1998790 ']' 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:47.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
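[editor's note] The block above is target/auth.sh calling gen_dhchap_key from nvmf/common.sh to build the keys[] / ckeys[] file arrays that the keyring RPCs load next (each keys[i] is paired with a controller key of a different digest, and ckeys[3] is deliberately left empty). A minimal sketch of the traced steps follows, with the caveat that the inline "python -" body is not captured in the log and the DHHC-1 framing below is inferred from the secrets later passed to nvme connect:

    # Sketch of gen_dhchap_key <digest> <len> as traced above; not the verbatim
    # nvmf/common.sh implementation.
    gen_dhchap_key_sketch() {
        local digest=$1 len=$2
        local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
        local key file
        # len is a hex-character count, so read len/2 random bytes and hex-encode them
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
        file=$(mktemp -t "spdk.key-$digest.XXX")        # e.g. /tmp/spdk.key-sha512.6Wv
        # The trace runs an inline "python -" step here whose body is not shown;
        # judging by the --dhchap-secret values used with nvme connect later, it
        # wraps the hex string as "DHHC-1:0<digest-id>:<base64(secret + CRC-32)>:".
        # The CRC byte order below is an assumption of this sketch.
        python3 - "$key" "${digests[$digest]}" > "$file" <<'PY'
    import base64, struct, sys, zlib
    secret = sys.argv[1].encode()
    payload = base64.b64encode(secret + struct.pack("<I", zlib.crc32(secret))).decode()
    print("DHHC-1:{:02}:{}:".format(int(sys.argv[2]), payload))
    PY
        chmod 0600 "$file"
        echo "$file"
    }

The resulting 0600 key files are what keyring_file_add_key registers below as key0..key3 and ckey0..ckey2, on both the target RPC socket (/var/tmp/spdk.sock) and the host socket (/var/tmp/host.sock).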
00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:47.634 10:38:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.914 10:38:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:47.914 10:38:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:22:47.914 10:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1999133 /var/tmp/host.sock 00:22:47.914 10:38:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1999133 ']' 00:22:47.914 10:38:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:22:47.914 10:38:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:47.914 10:38:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:22:47.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:22:47.914 10:38:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:47.914 10:38:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.914 10:38:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:47.914 10:38:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:22:47.914 10:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:22:47.914 10:38:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.914 10:38:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.914 10:38:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.914 10:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:22:47.914 10:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.DgX 00:22:47.914 10:38:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.914 10:38:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.914 10:38:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.914 10:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.DgX 00:22:47.914 10:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.DgX 00:22:48.175 10:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.6Wv ]] 00:22:48.175 10:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.6Wv 00:22:48.175 10:38:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.175 10:38:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.175 10:38:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.175 10:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.6Wv 00:22:48.175 10:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.6Wv 00:22:48.435 10:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:22:48.435 10:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Tm8 00:22:48.435 10:38:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.435 10:38:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.435 10:38:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.435 10:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Tm8 00:22:48.435 10:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Tm8 00:22:48.435 10:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.JEp ]] 00:22:48.435 10:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.JEp 00:22:48.435 10:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.435 10:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.435 10:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.435 10:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.JEp 00:22:48.435 10:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.JEp 00:22:48.695 10:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:22:48.695 10:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Gov 00:22:48.695 10:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.695 10:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.695 10:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.695 10:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Gov 00:22:48.695 10:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Gov 00:22:48.956 10:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.L7Q ]] 00:22:48.956 10:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.L7Q 00:22:48.956 10:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.956 10:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.956 10:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.956 10:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.L7Q 00:22:48.956 10:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.L7Q 00:22:48.956 10:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:22:48.956 10:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.0ax 00:22:48.956 10:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.956 10:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.956 10:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.956 10:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.0ax 00:22:48.956 10:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.0ax 00:22:49.216 10:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:22:49.216 10:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:22:49.216 10:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:49.216 10:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:49.216 10:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:49.216 10:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:49.216 10:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:22:49.216 10:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:49.216 10:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:49.216 10:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:49.216 10:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:49.216 10:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:49.216 10:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:49.216 10:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.216 10:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.216 10:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.216 10:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:49.216 10:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:49.476 00:22:49.476 10:38:55 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:49.476 10:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:49.476 10:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:49.736 10:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.736 10:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:49.736 10:38:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.736 10:38:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.736 10:38:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.736 10:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:49.736 { 00:22:49.736 "cntlid": 1, 00:22:49.736 "qid": 0, 00:22:49.736 "state": "enabled", 00:22:49.736 "thread": "nvmf_tgt_poll_group_000", 00:22:49.736 "listen_address": { 00:22:49.736 "trtype": "TCP", 00:22:49.736 "adrfam": "IPv4", 00:22:49.736 "traddr": "10.0.0.2", 00:22:49.736 "trsvcid": "4420" 00:22:49.736 }, 00:22:49.736 "peer_address": { 00:22:49.736 "trtype": "TCP", 00:22:49.736 "adrfam": "IPv4", 00:22:49.736 "traddr": "10.0.0.1", 00:22:49.736 "trsvcid": "50940" 00:22:49.736 }, 00:22:49.736 "auth": { 00:22:49.736 "state": "completed", 00:22:49.736 "digest": "sha256", 00:22:49.736 "dhgroup": "null" 00:22:49.736 } 00:22:49.736 } 00:22:49.736 ]' 00:22:49.736 10:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:49.736 10:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:49.736 10:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:49.736 10:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:49.736 10:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:49.736 10:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:49.736 10:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:49.736 10:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:49.996 10:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ODQ4MmNmYzc5N2Q3YzNiYjIyMTBjYWY1MWMxMmEyMjM4Y2VmZDliOWM0OTI4Yzky/ePZDw==: --dhchap-ctrl-secret DHHC-1:03:M2ZmMDExZjRmODRjOTY1NjE5YTkwOWQxNjQ5YThhN2Q1YmE5NzFhZGI1ZDFmN2JhNzc5MmQ4NjI0ZWM4MDA5Yia8uyw=: 00:22:50.613 10:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:50.613 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:50.613 10:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:50.613 10:38:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.613 10:38:56 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.613 10:38:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.613 10:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:50.613 10:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:50.613 10:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:50.872 10:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:22:50.872 10:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:50.872 10:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:50.873 10:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:50.873 10:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:50.873 10:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:50.873 10:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:50.873 10:38:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.873 10:38:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.873 10:38:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.873 10:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:50.873 10:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:51.132 00:22:51.132 10:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:51.132 10:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:51.132 10:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:51.392 10:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:51.392 10:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:51.392 10:38:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.392 10:38:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.392 10:38:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.392 10:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:51.392 { 00:22:51.392 "cntlid": 3, 00:22:51.392 "qid": 0, 00:22:51.392 
"state": "enabled", 00:22:51.392 "thread": "nvmf_tgt_poll_group_000", 00:22:51.392 "listen_address": { 00:22:51.392 "trtype": "TCP", 00:22:51.392 "adrfam": "IPv4", 00:22:51.392 "traddr": "10.0.0.2", 00:22:51.392 "trsvcid": "4420" 00:22:51.392 }, 00:22:51.392 "peer_address": { 00:22:51.392 "trtype": "TCP", 00:22:51.392 "adrfam": "IPv4", 00:22:51.392 "traddr": "10.0.0.1", 00:22:51.392 "trsvcid": "50966" 00:22:51.392 }, 00:22:51.392 "auth": { 00:22:51.392 "state": "completed", 00:22:51.392 "digest": "sha256", 00:22:51.392 "dhgroup": "null" 00:22:51.392 } 00:22:51.392 } 00:22:51.392 ]' 00:22:51.392 10:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:51.392 10:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:51.392 10:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:51.392 10:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:51.392 10:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:51.392 10:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:51.392 10:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:51.392 10:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:51.652 10:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:N2VmZTE4MTNhMzBiZmU4Mjg5NzM2OGE1ODhjNTQ1YTcJFoN9: --dhchap-ctrl-secret DHHC-1:02:M2NmZDA1ZmZhZWY2MjE0ZTI1OGM0OTJjNjhmNjQyNmQ5MDQ3MzgyYjdmZmY3NmZke0TaFQ==: 00:22:52.223 10:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:52.223 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:52.223 10:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:52.223 10:38:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.223 10:38:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.223 10:38:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.223 10:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:52.223 10:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:52.223 10:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:52.483 10:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:22:52.483 10:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:52.483 10:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:52.483 10:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:52.483 10:38:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:52.483 10:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:52.483 10:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:52.483 10:38:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.483 10:38:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.483 10:38:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.483 10:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:52.483 10:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:52.743 00:22:52.743 10:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:52.743 10:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:52.743 10:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:53.003 10:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.003 10:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:53.003 10:38:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.003 10:38:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.003 10:38:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.003 10:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:53.003 { 00:22:53.003 "cntlid": 5, 00:22:53.003 "qid": 0, 00:22:53.003 "state": "enabled", 00:22:53.003 "thread": "nvmf_tgt_poll_group_000", 00:22:53.003 "listen_address": { 00:22:53.003 "trtype": "TCP", 00:22:53.003 "adrfam": "IPv4", 00:22:53.003 "traddr": "10.0.0.2", 00:22:53.003 "trsvcid": "4420" 00:22:53.003 }, 00:22:53.003 "peer_address": { 00:22:53.003 "trtype": "TCP", 00:22:53.003 "adrfam": "IPv4", 00:22:53.003 "traddr": "10.0.0.1", 00:22:53.003 "trsvcid": "50986" 00:22:53.003 }, 00:22:53.003 "auth": { 00:22:53.003 "state": "completed", 00:22:53.003 "digest": "sha256", 00:22:53.003 "dhgroup": "null" 00:22:53.003 } 00:22:53.003 } 00:22:53.003 ]' 00:22:53.003 10:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:53.003 10:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:53.003 10:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:53.003 10:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:53.003 10:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:22:53.003 10:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:53.003 10:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:53.003 10:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:53.263 10:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:MTY0MGRmZjE2ZTAzNWQ0NjFkZjU2YjlhN2IwYmMzNjgwYjAwMDhjYzMwM2JkOTYyXnKU4Q==: --dhchap-ctrl-secret DHHC-1:01:NGQ2OTQ1MzJiYTljYTczYWEzYzA5MWUxNDYxYzZhZjkPEE46: 00:22:53.834 10:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:53.834 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:53.834 10:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:53.834 10:38:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.834 10:38:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.834 10:38:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.834 10:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:53.834 10:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:53.834 10:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:54.095 10:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:22:54.095 10:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:54.095 10:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:54.095 10:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:54.095 10:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:54.095 10:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:54.095 10:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:54.095 10:38:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.095 10:38:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.095 10:38:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.095 10:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:54.095 10:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:54.355 00:22:54.356 10:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:54.356 10:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:54.356 10:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:54.617 10:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:54.617 10:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:54.617 10:39:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.617 10:39:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.617 10:39:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.617 10:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:54.617 { 00:22:54.617 "cntlid": 7, 00:22:54.617 "qid": 0, 00:22:54.617 "state": "enabled", 00:22:54.617 "thread": "nvmf_tgt_poll_group_000", 00:22:54.617 "listen_address": { 00:22:54.617 "trtype": "TCP", 00:22:54.617 "adrfam": "IPv4", 00:22:54.617 "traddr": "10.0.0.2", 00:22:54.617 "trsvcid": "4420" 00:22:54.617 }, 00:22:54.617 "peer_address": { 00:22:54.617 "trtype": "TCP", 00:22:54.617 "adrfam": "IPv4", 00:22:54.617 "traddr": "10.0.0.1", 00:22:54.617 "trsvcid": "51002" 00:22:54.617 }, 00:22:54.617 "auth": { 00:22:54.617 "state": "completed", 00:22:54.617 "digest": "sha256", 00:22:54.617 "dhgroup": "null" 00:22:54.617 } 00:22:54.617 } 00:22:54.617 ]' 00:22:54.617 10:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:54.617 10:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:54.617 10:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:54.617 10:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:54.617 10:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:54.617 10:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:54.617 10:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:54.617 10:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:54.878 10:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:MmFjZjBjNGZmMTdkNjNhYjBmOTgzNmE0ZGVhMGU3M2YyZjgzNDhkNzcwYWJkYTIzOTdhMWEwY2UzMDMzZGQ1MpoHJWU=: 00:22:55.449 10:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:55.449 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:55.449 10:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:55.449 10:39:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.449 10:39:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.449 10:39:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.449 10:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:55.449 10:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:55.449 10:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:55.449 10:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:55.709 10:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:22:55.709 10:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:55.709 10:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:55.709 10:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:55.709 10:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:55.709 10:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:55.709 10:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:55.709 10:39:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.709 10:39:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.709 10:39:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.709 10:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:55.709 10:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:55.970 00:22:55.970 10:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:55.970 10:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:55.970 10:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:55.970 10:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.970 10:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:55.970 10:39:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:22:55.970 10:39:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.970 10:39:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.970 10:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:55.970 { 00:22:55.970 "cntlid": 9, 00:22:55.970 "qid": 0, 00:22:55.970 "state": "enabled", 00:22:55.970 "thread": "nvmf_tgt_poll_group_000", 00:22:55.970 "listen_address": { 00:22:55.970 "trtype": "TCP", 00:22:55.970 "adrfam": "IPv4", 00:22:55.970 "traddr": "10.0.0.2", 00:22:55.970 "trsvcid": "4420" 00:22:55.970 }, 00:22:55.970 "peer_address": { 00:22:55.970 "trtype": "TCP", 00:22:55.970 "adrfam": "IPv4", 00:22:55.970 "traddr": "10.0.0.1", 00:22:55.970 "trsvcid": "51040" 00:22:55.970 }, 00:22:55.970 "auth": { 00:22:55.970 "state": "completed", 00:22:55.970 "digest": "sha256", 00:22:55.970 "dhgroup": "ffdhe2048" 00:22:55.970 } 00:22:55.970 } 00:22:55.970 ]' 00:22:55.970 10:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:56.231 10:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:56.231 10:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:56.231 10:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:56.231 10:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:56.231 10:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:56.231 10:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:56.231 10:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:56.491 10:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ODQ4MmNmYzc5N2Q3YzNiYjIyMTBjYWY1MWMxMmEyMjM4Y2VmZDliOWM0OTI4Yzky/ePZDw==: --dhchap-ctrl-secret DHHC-1:03:M2ZmMDExZjRmODRjOTY1NjE5YTkwOWQxNjQ5YThhN2Q1YmE5NzFhZGI1ZDFmN2JhNzc5MmQ4NjI0ZWM4MDA5Yia8uyw=: 00:22:57.065 10:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:57.065 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:57.065 10:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:57.065 10:39:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.065 10:39:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.065 10:39:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.065 10:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:57.065 10:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:57.065 10:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:22:57.325 10:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:22:57.325 10:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:57.325 10:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:57.325 10:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:57.325 10:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:57.325 10:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:57.325 10:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:57.325 10:39:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.325 10:39:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.325 10:39:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.325 10:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:57.325 10:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:57.598 00:22:57.598 10:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:57.598 10:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:57.598 10:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:57.598 10:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:57.598 10:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:57.598 10:39:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.598 10:39:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.598 10:39:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.598 10:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:57.598 { 00:22:57.598 "cntlid": 11, 00:22:57.598 "qid": 0, 00:22:57.598 "state": "enabled", 00:22:57.598 "thread": "nvmf_tgt_poll_group_000", 00:22:57.598 "listen_address": { 00:22:57.598 "trtype": "TCP", 00:22:57.598 "adrfam": "IPv4", 00:22:57.598 "traddr": "10.0.0.2", 00:22:57.598 "trsvcid": "4420" 00:22:57.598 }, 00:22:57.598 "peer_address": { 00:22:57.598 "trtype": "TCP", 00:22:57.598 "adrfam": "IPv4", 00:22:57.598 "traddr": "10.0.0.1", 00:22:57.598 "trsvcid": "55508" 00:22:57.598 }, 00:22:57.598 "auth": { 00:22:57.598 "state": "completed", 00:22:57.598 "digest": "sha256", 00:22:57.598 "dhgroup": "ffdhe2048" 00:22:57.598 } 00:22:57.598 } 00:22:57.598 ]' 00:22:57.598 
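[editor's note] From this point target/auth.sh repeats the same connect_authenticate sequence for every digest/dhgroup/key combination (the null groups above, ffdhe2048 here, ffdhe3072 further down). Condensed into one iteration, using only the RPCs and nvme-cli calls that actually appear in the trace (NQNs, host ID and 10.0.0.x addresses as in this run; the long DHHC-1 secrets are elided):

    # One traced connect_authenticate pass, summarized; not a new test.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostsock=/var/tmp/host.sock
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

    # host side: restrict the SPDK initiator to one digest/dhgroup pair
    $rpc -s $hostsock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

    # target side (default /var/tmp/spdk.sock): allow the host with key0/ckey0 from the keyring
    $rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # authenticate through the SPDK initiator and verify the qpair auth state
    $rpc -s $hostsock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q $hostnqn -n $subnqn --dhchap-key key0 --dhchap-ctrlr-key ckey0
    $rpc nvmf_subsystem_get_qpairs $subnqn | jq -r '.[0].auth.state'   # expect "completed"
    $rpc -s $hostsock bdev_nvme_detach_controller nvme0

    # repeat the handshake with the kernel initiator, passing the DHHC-1 secrets directly
    nvme connect -t tcp -a 10.0.0.2 -n $subnqn -i 1 -q $hostnqn \
        --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 \
        --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
    nvme disconnect -n $subnqn
    $rpc nvmf_subsystem_remove_host $subnqn $hostnqn

Each pass checks that nvmf_subsystem_get_qpairs reports the expected digest and dhgroup with auth state "completed" before detaching and re-running the handshake through the kernel initiator.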
10:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:57.598 10:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:57.598 10:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:57.858 10:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:57.858 10:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:57.858 10:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:57.858 10:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:57.858 10:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:57.858 10:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:N2VmZTE4MTNhMzBiZmU4Mjg5NzM2OGE1ODhjNTQ1YTcJFoN9: --dhchap-ctrl-secret DHHC-1:02:M2NmZDA1ZmZhZWY2MjE0ZTI1OGM0OTJjNjhmNjQyNmQ5MDQ3MzgyYjdmZmY3NmZke0TaFQ==: 00:22:58.798 10:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:58.798 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:58.798 10:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:58.798 10:39:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.798 10:39:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.798 10:39:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.798 10:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:58.798 10:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:58.798 10:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:58.798 10:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:22:58.798 10:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:58.798 10:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:58.798 10:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:58.798 10:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:58.798 10:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:58.798 10:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:58.798 10:39:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.798 10:39:04 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:58.798 10:39:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.798 10:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:58.798 10:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:59.059 00:22:59.059 10:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:59.059 10:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:59.059 10:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:59.319 10:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.319 10:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:59.319 10:39:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.319 10:39:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.319 10:39:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.319 10:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:59.319 { 00:22:59.319 "cntlid": 13, 00:22:59.319 "qid": 0, 00:22:59.319 "state": "enabled", 00:22:59.319 "thread": "nvmf_tgt_poll_group_000", 00:22:59.320 "listen_address": { 00:22:59.320 "trtype": "TCP", 00:22:59.320 "adrfam": "IPv4", 00:22:59.320 "traddr": "10.0.0.2", 00:22:59.320 "trsvcid": "4420" 00:22:59.320 }, 00:22:59.320 "peer_address": { 00:22:59.320 "trtype": "TCP", 00:22:59.320 "adrfam": "IPv4", 00:22:59.320 "traddr": "10.0.0.1", 00:22:59.320 "trsvcid": "55534" 00:22:59.320 }, 00:22:59.320 "auth": { 00:22:59.320 "state": "completed", 00:22:59.320 "digest": "sha256", 00:22:59.320 "dhgroup": "ffdhe2048" 00:22:59.320 } 00:22:59.320 } 00:22:59.320 ]' 00:22:59.320 10:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:59.320 10:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:59.320 10:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:59.320 10:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:59.320 10:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:59.320 10:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:59.320 10:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:59.320 10:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:59.580 10:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:MTY0MGRmZjE2ZTAzNWQ0NjFkZjU2YjlhN2IwYmMzNjgwYjAwMDhjYzMwM2JkOTYyXnKU4Q==: --dhchap-ctrl-secret DHHC-1:01:NGQ2OTQ1MzJiYTljYTczYWEzYzA5MWUxNDYxYzZhZjkPEE46: 00:23:00.153 10:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:00.153 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:00.153 10:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:00.153 10:39:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.153 10:39:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.153 10:39:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.153 10:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:00.153 10:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:00.153 10:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:00.415 10:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:23:00.415 10:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:00.415 10:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:00.415 10:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:00.415 10:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:00.415 10:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:00.415 10:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:00.415 10:39:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.415 10:39:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.415 10:39:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.415 10:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:00.415 10:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:00.675 00:23:00.675 10:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:00.675 10:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:23:00.675 10:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:00.936 10:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.936 10:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:00.936 10:39:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.936 10:39:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.936 10:39:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.936 10:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:00.936 { 00:23:00.936 "cntlid": 15, 00:23:00.936 "qid": 0, 00:23:00.936 "state": "enabled", 00:23:00.936 "thread": "nvmf_tgt_poll_group_000", 00:23:00.936 "listen_address": { 00:23:00.936 "trtype": "TCP", 00:23:00.936 "adrfam": "IPv4", 00:23:00.936 "traddr": "10.0.0.2", 00:23:00.936 "trsvcid": "4420" 00:23:00.936 }, 00:23:00.936 "peer_address": { 00:23:00.936 "trtype": "TCP", 00:23:00.936 "adrfam": "IPv4", 00:23:00.936 "traddr": "10.0.0.1", 00:23:00.936 "trsvcid": "55562" 00:23:00.936 }, 00:23:00.936 "auth": { 00:23:00.936 "state": "completed", 00:23:00.936 "digest": "sha256", 00:23:00.936 "dhgroup": "ffdhe2048" 00:23:00.936 } 00:23:00.936 } 00:23:00.936 ]' 00:23:00.936 10:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:00.936 10:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:00.936 10:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:00.936 10:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:00.936 10:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:00.936 10:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:00.936 10:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:00.936 10:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:01.196 10:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:MmFjZjBjNGZmMTdkNjNhYjBmOTgzNmE0ZGVhMGU3M2YyZjgzNDhkNzcwYWJkYTIzOTdhMWEwY2UzMDMzZGQ1MpoHJWU=: 00:23:01.769 10:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:01.769 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:01.769 10:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:01.769 10:39:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.769 10:39:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.769 10:39:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.769 10:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:01.769 10:39:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:01.769 10:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:01.769 10:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:02.030 10:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:23:02.030 10:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:02.030 10:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:02.030 10:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:23:02.030 10:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:02.030 10:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:02.030 10:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:02.030 10:39:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.030 10:39:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.030 10:39:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.030 10:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:02.030 10:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:02.291 00:23:02.291 10:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:02.291 10:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:02.291 10:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:02.291 10:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.552 10:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:02.552 10:39:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.552 10:39:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.552 10:39:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.552 10:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:02.552 { 00:23:02.552 "cntlid": 17, 00:23:02.552 "qid": 0, 00:23:02.552 "state": "enabled", 00:23:02.552 "thread": "nvmf_tgt_poll_group_000", 00:23:02.552 "listen_address": { 00:23:02.552 "trtype": "TCP", 00:23:02.552 "adrfam": "IPv4", 
00:23:02.552 "traddr": "10.0.0.2", 00:23:02.552 "trsvcid": "4420" 00:23:02.552 }, 00:23:02.552 "peer_address": { 00:23:02.552 "trtype": "TCP", 00:23:02.552 "adrfam": "IPv4", 00:23:02.552 "traddr": "10.0.0.1", 00:23:02.552 "trsvcid": "55580" 00:23:02.552 }, 00:23:02.552 "auth": { 00:23:02.552 "state": "completed", 00:23:02.552 "digest": "sha256", 00:23:02.552 "dhgroup": "ffdhe3072" 00:23:02.552 } 00:23:02.552 } 00:23:02.552 ]' 00:23:02.552 10:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:02.552 10:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:02.552 10:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:02.552 10:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:02.552 10:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:02.552 10:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:02.552 10:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:02.552 10:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:02.812 10:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ODQ4MmNmYzc5N2Q3YzNiYjIyMTBjYWY1MWMxMmEyMjM4Y2VmZDliOWM0OTI4Yzky/ePZDw==: --dhchap-ctrl-secret DHHC-1:03:M2ZmMDExZjRmODRjOTY1NjE5YTkwOWQxNjQ5YThhN2Q1YmE5NzFhZGI1ZDFmN2JhNzc5MmQ4NjI0ZWM4MDA5Yia8uyw=: 00:23:03.383 10:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:03.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:03.383 10:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:03.383 10:39:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.383 10:39:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.383 10:39:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.383 10:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:03.383 10:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:03.383 10:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:03.644 10:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:23:03.644 10:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:03.644 10:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:03.644 10:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:23:03.644 10:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:03.644 10:39:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:03.644 10:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:03.644 10:39:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.644 10:39:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.644 10:39:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.644 10:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:03.644 10:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:03.905 00:23:03.905 10:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:03.905 10:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:03.905 10:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:03.905 10:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.905 10:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:03.905 10:39:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.905 10:39:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.905 10:39:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.905 10:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:03.905 { 00:23:03.905 "cntlid": 19, 00:23:03.905 "qid": 0, 00:23:03.905 "state": "enabled", 00:23:03.905 "thread": "nvmf_tgt_poll_group_000", 00:23:03.905 "listen_address": { 00:23:03.905 "trtype": "TCP", 00:23:03.905 "adrfam": "IPv4", 00:23:03.905 "traddr": "10.0.0.2", 00:23:03.905 "trsvcid": "4420" 00:23:03.905 }, 00:23:03.905 "peer_address": { 00:23:03.905 "trtype": "TCP", 00:23:03.905 "adrfam": "IPv4", 00:23:03.905 "traddr": "10.0.0.1", 00:23:03.905 "trsvcid": "55602" 00:23:03.905 }, 00:23:03.905 "auth": { 00:23:03.905 "state": "completed", 00:23:03.905 "digest": "sha256", 00:23:03.905 "dhgroup": "ffdhe3072" 00:23:03.905 } 00:23:03.905 } 00:23:03.905 ]' 00:23:03.905 10:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:04.165 10:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:04.165 10:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:04.165 10:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:04.165 10:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:04.165 10:39:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:04.165 10:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:04.165 10:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:04.424 10:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:N2VmZTE4MTNhMzBiZmU4Mjg5NzM2OGE1ODhjNTQ1YTcJFoN9: --dhchap-ctrl-secret DHHC-1:02:M2NmZDA1ZmZhZWY2MjE0ZTI1OGM0OTJjNjhmNjQyNmQ5MDQ3MzgyYjdmZmY3NmZke0TaFQ==: 00:23:04.994 10:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:04.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:04.994 10:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:04.994 10:39:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.994 10:39:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.994 10:39:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.994 10:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:04.994 10:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:04.994 10:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:05.253 10:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:23:05.253 10:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:05.253 10:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:05.253 10:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:23:05.253 10:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:05.253 10:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:05.253 10:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:05.253 10:39:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.253 10:39:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.253 10:39:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.253 10:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:05.253 10:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:05.513 00:23:05.513 10:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:05.513 10:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:05.513 10:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:05.513 10:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.513 10:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:05.513 10:39:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.513 10:39:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.513 10:39:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.513 10:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:05.513 { 00:23:05.513 "cntlid": 21, 00:23:05.513 "qid": 0, 00:23:05.513 "state": "enabled", 00:23:05.513 "thread": "nvmf_tgt_poll_group_000", 00:23:05.513 "listen_address": { 00:23:05.513 "trtype": "TCP", 00:23:05.513 "adrfam": "IPv4", 00:23:05.513 "traddr": "10.0.0.2", 00:23:05.513 "trsvcid": "4420" 00:23:05.513 }, 00:23:05.513 "peer_address": { 00:23:05.513 "trtype": "TCP", 00:23:05.513 "adrfam": "IPv4", 00:23:05.513 "traddr": "10.0.0.1", 00:23:05.513 "trsvcid": "55636" 00:23:05.513 }, 00:23:05.513 "auth": { 00:23:05.513 "state": "completed", 00:23:05.513 "digest": "sha256", 00:23:05.513 "dhgroup": "ffdhe3072" 00:23:05.513 } 00:23:05.513 } 00:23:05.513 ]' 00:23:05.513 10:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:05.782 10:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:05.782 10:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:05.782 10:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:05.782 10:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:05.782 10:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:05.782 10:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:05.782 10:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:06.061 10:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:MTY0MGRmZjE2ZTAzNWQ0NjFkZjU2YjlhN2IwYmMzNjgwYjAwMDhjYzMwM2JkOTYyXnKU4Q==: --dhchap-ctrl-secret DHHC-1:01:NGQ2OTQ1MzJiYTljYTczYWEzYzA5MWUxNDYxYzZhZjkPEE46: 00:23:06.720 10:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:06.720 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:23:06.720 10:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:06.720 10:39:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.720 10:39:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.720 10:39:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.720 10:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:06.720 10:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:06.720 10:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:06.720 10:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:23:06.720 10:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:06.720 10:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:06.720 10:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:23:06.720 10:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:06.720 10:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:06.720 10:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:06.720 10:39:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.720 10:39:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.720 10:39:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.720 10:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:06.720 10:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:06.981 00:23:06.981 10:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:06.981 10:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:06.981 10:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:07.240 10:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.240 10:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:07.240 10:39:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.240 10:39:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:23:07.240 10:39:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.240 10:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:07.240 { 00:23:07.240 "cntlid": 23, 00:23:07.240 "qid": 0, 00:23:07.240 "state": "enabled", 00:23:07.240 "thread": "nvmf_tgt_poll_group_000", 00:23:07.240 "listen_address": { 00:23:07.240 "trtype": "TCP", 00:23:07.240 "adrfam": "IPv4", 00:23:07.240 "traddr": "10.0.0.2", 00:23:07.240 "trsvcid": "4420" 00:23:07.240 }, 00:23:07.240 "peer_address": { 00:23:07.240 "trtype": "TCP", 00:23:07.240 "adrfam": "IPv4", 00:23:07.240 "traddr": "10.0.0.1", 00:23:07.240 "trsvcid": "55674" 00:23:07.240 }, 00:23:07.240 "auth": { 00:23:07.240 "state": "completed", 00:23:07.240 "digest": "sha256", 00:23:07.240 "dhgroup": "ffdhe3072" 00:23:07.240 } 00:23:07.240 } 00:23:07.240 ]' 00:23:07.240 10:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:07.240 10:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:07.240 10:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:07.240 10:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:07.240 10:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:07.240 10:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:07.240 10:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:07.240 10:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:07.501 10:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:MmFjZjBjNGZmMTdkNjNhYjBmOTgzNmE0ZGVhMGU3M2YyZjgzNDhkNzcwYWJkYTIzOTdhMWEwY2UzMDMzZGQ1MpoHJWU=: 00:23:08.441 10:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:08.441 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:08.441 10:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:08.441 10:39:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.441 10:39:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.441 10:39:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.441 10:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:08.441 10:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:08.441 10:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:08.441 10:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:08.441 10:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:23:08.441 10:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:08.441 10:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:08.441 10:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:08.441 10:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:08.441 10:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:08.441 10:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:08.441 10:39:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.441 10:39:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.441 10:39:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.441 10:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:08.441 10:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:08.701 00:23:08.701 10:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:08.701 10:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:08.701 10:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:08.961 10:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:08.961 10:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:08.961 10:39:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.961 10:39:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.961 10:39:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.961 10:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:08.961 { 00:23:08.961 "cntlid": 25, 00:23:08.961 "qid": 0, 00:23:08.961 "state": "enabled", 00:23:08.961 "thread": "nvmf_tgt_poll_group_000", 00:23:08.961 "listen_address": { 00:23:08.961 "trtype": "TCP", 00:23:08.961 "adrfam": "IPv4", 00:23:08.961 "traddr": "10.0.0.2", 00:23:08.961 "trsvcid": "4420" 00:23:08.961 }, 00:23:08.961 "peer_address": { 00:23:08.961 "trtype": "TCP", 00:23:08.961 "adrfam": "IPv4", 00:23:08.961 "traddr": "10.0.0.1", 00:23:08.961 "trsvcid": "48262" 00:23:08.961 }, 00:23:08.961 "auth": { 00:23:08.961 "state": "completed", 00:23:08.961 "digest": "sha256", 00:23:08.961 "dhgroup": "ffdhe4096" 00:23:08.961 } 00:23:08.961 } 00:23:08.961 ]' 00:23:08.961 10:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:08.961 10:39:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:08.961 10:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:08.961 10:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:08.961 10:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:08.961 10:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:08.961 10:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:08.961 10:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:09.222 10:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ODQ4MmNmYzc5N2Q3YzNiYjIyMTBjYWY1MWMxMmEyMjM4Y2VmZDliOWM0OTI4Yzky/ePZDw==: --dhchap-ctrl-secret DHHC-1:03:M2ZmMDExZjRmODRjOTY1NjE5YTkwOWQxNjQ5YThhN2Q1YmE5NzFhZGI1ZDFmN2JhNzc5MmQ4NjI0ZWM4MDA5Yia8uyw=: 00:23:09.793 10:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:09.793 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:09.793 10:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:09.793 10:39:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.793 10:39:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.793 10:39:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.793 10:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:09.793 10:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:09.793 10:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:10.053 10:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:23:10.053 10:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:10.053 10:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:10.053 10:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:10.053 10:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:10.053 10:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:10.053 10:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:10.053 10:39:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.053 10:39:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.053 10:39:15 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.053 10:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:10.053 10:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:10.313 00:23:10.313 10:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:10.313 10:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:10.313 10:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:10.573 10:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.573 10:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:10.573 10:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.573 10:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.573 10:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.573 10:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:10.573 { 00:23:10.573 "cntlid": 27, 00:23:10.573 "qid": 0, 00:23:10.573 "state": "enabled", 00:23:10.573 "thread": "nvmf_tgt_poll_group_000", 00:23:10.573 "listen_address": { 00:23:10.573 "trtype": "TCP", 00:23:10.573 "adrfam": "IPv4", 00:23:10.573 "traddr": "10.0.0.2", 00:23:10.573 "trsvcid": "4420" 00:23:10.573 }, 00:23:10.573 "peer_address": { 00:23:10.573 "trtype": "TCP", 00:23:10.573 "adrfam": "IPv4", 00:23:10.573 "traddr": "10.0.0.1", 00:23:10.573 "trsvcid": "48298" 00:23:10.573 }, 00:23:10.573 "auth": { 00:23:10.573 "state": "completed", 00:23:10.573 "digest": "sha256", 00:23:10.573 "dhgroup": "ffdhe4096" 00:23:10.573 } 00:23:10.573 } 00:23:10.573 ]' 00:23:10.573 10:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:10.573 10:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:10.573 10:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:10.573 10:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:10.573 10:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:10.573 10:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:10.573 10:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:10.573 10:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:10.832 10:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:N2VmZTE4MTNhMzBiZmU4Mjg5NzM2OGE1ODhjNTQ1YTcJFoN9: --dhchap-ctrl-secret DHHC-1:02:M2NmZDA1ZmZhZWY2MjE0ZTI1OGM0OTJjNjhmNjQyNmQ5MDQ3MzgyYjdmZmY3NmZke0TaFQ==: 00:23:11.402 10:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:11.402 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:11.402 10:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:11.402 10:39:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.402 10:39:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.402 10:39:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.402 10:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:11.402 10:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:11.402 10:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:11.661 10:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:23:11.662 10:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:11.662 10:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:11.662 10:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:11.662 10:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:11.662 10:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:11.662 10:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:11.662 10:39:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.662 10:39:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.662 10:39:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.662 10:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:11.662 10:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:11.921 00:23:11.921 10:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:11.921 10:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:11.921 10:39:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:12.180 10:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.180 10:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:12.180 10:39:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.180 10:39:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.180 10:39:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.180 10:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:12.180 { 00:23:12.180 "cntlid": 29, 00:23:12.180 "qid": 0, 00:23:12.180 "state": "enabled", 00:23:12.180 "thread": "nvmf_tgt_poll_group_000", 00:23:12.180 "listen_address": { 00:23:12.180 "trtype": "TCP", 00:23:12.180 "adrfam": "IPv4", 00:23:12.180 "traddr": "10.0.0.2", 00:23:12.180 "trsvcid": "4420" 00:23:12.180 }, 00:23:12.180 "peer_address": { 00:23:12.180 "trtype": "TCP", 00:23:12.180 "adrfam": "IPv4", 00:23:12.180 "traddr": "10.0.0.1", 00:23:12.180 "trsvcid": "48318" 00:23:12.180 }, 00:23:12.180 "auth": { 00:23:12.180 "state": "completed", 00:23:12.180 "digest": "sha256", 00:23:12.180 "dhgroup": "ffdhe4096" 00:23:12.180 } 00:23:12.180 } 00:23:12.180 ]' 00:23:12.180 10:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:12.180 10:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:12.180 10:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:12.180 10:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:12.180 10:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:12.180 10:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:12.180 10:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:12.180 10:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:12.440 10:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:MTY0MGRmZjE2ZTAzNWQ0NjFkZjU2YjlhN2IwYmMzNjgwYjAwMDhjYzMwM2JkOTYyXnKU4Q==: --dhchap-ctrl-secret DHHC-1:01:NGQ2OTQ1MzJiYTljYTczYWEzYzA5MWUxNDYxYzZhZjkPEE46: 00:23:13.010 10:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:13.010 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:13.010 10:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:13.010 10:39:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.010 10:39:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.010 10:39:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.010 10:39:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:13.010 10:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:13.010 10:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:13.269 10:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:23:13.269 10:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:13.269 10:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:13.269 10:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:13.269 10:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:13.269 10:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:13.269 10:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:13.269 10:39:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.269 10:39:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.269 10:39:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.269 10:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:13.269 10:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:13.529 00:23:13.529 10:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:13.529 10:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:13.529 10:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:13.788 10:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.788 10:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:13.788 10:39:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.788 10:39:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.788 10:39:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.788 10:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:13.788 { 00:23:13.788 "cntlid": 31, 00:23:13.788 "qid": 0, 00:23:13.788 "state": "enabled", 00:23:13.788 "thread": "nvmf_tgt_poll_group_000", 00:23:13.788 "listen_address": { 00:23:13.788 "trtype": "TCP", 00:23:13.788 "adrfam": "IPv4", 00:23:13.788 "traddr": "10.0.0.2", 00:23:13.788 "trsvcid": "4420" 00:23:13.788 }, 
00:23:13.788 "peer_address": { 00:23:13.788 "trtype": "TCP", 00:23:13.788 "adrfam": "IPv4", 00:23:13.788 "traddr": "10.0.0.1", 00:23:13.788 "trsvcid": "48338" 00:23:13.788 }, 00:23:13.788 "auth": { 00:23:13.788 "state": "completed", 00:23:13.788 "digest": "sha256", 00:23:13.788 "dhgroup": "ffdhe4096" 00:23:13.788 } 00:23:13.788 } 00:23:13.788 ]' 00:23:13.788 10:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:13.788 10:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:13.788 10:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:13.788 10:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:13.788 10:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:13.788 10:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:13.788 10:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:13.788 10:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:14.048 10:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:MmFjZjBjNGZmMTdkNjNhYjBmOTgzNmE0ZGVhMGU3M2YyZjgzNDhkNzcwYWJkYTIzOTdhMWEwY2UzMDMzZGQ1MpoHJWU=: 00:23:14.617 10:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:14.617 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:14.617 10:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:14.877 10:39:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.877 10:39:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.877 10:39:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.877 10:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:14.877 10:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:14.877 10:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:14.877 10:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:14.877 10:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:23:14.877 10:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:14.877 10:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:14.877 10:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:14.877 10:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:14.877 10:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:23:14.877 10:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:14.877 10:39:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.877 10:39:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.877 10:39:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.877 10:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:14.877 10:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:15.446 00:23:15.446 10:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:15.446 10:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:15.446 10:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:15.446 10:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.446 10:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:15.446 10:39:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.446 10:39:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.446 10:39:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.446 10:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:15.446 { 00:23:15.446 "cntlid": 33, 00:23:15.446 "qid": 0, 00:23:15.446 "state": "enabled", 00:23:15.446 "thread": "nvmf_tgt_poll_group_000", 00:23:15.446 "listen_address": { 00:23:15.446 "trtype": "TCP", 00:23:15.446 "adrfam": "IPv4", 00:23:15.446 "traddr": "10.0.0.2", 00:23:15.446 "trsvcid": "4420" 00:23:15.446 }, 00:23:15.446 "peer_address": { 00:23:15.446 "trtype": "TCP", 00:23:15.446 "adrfam": "IPv4", 00:23:15.447 "traddr": "10.0.0.1", 00:23:15.447 "trsvcid": "48362" 00:23:15.447 }, 00:23:15.447 "auth": { 00:23:15.447 "state": "completed", 00:23:15.447 "digest": "sha256", 00:23:15.447 "dhgroup": "ffdhe6144" 00:23:15.447 } 00:23:15.447 } 00:23:15.447 ]' 00:23:15.447 10:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:15.447 10:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:15.447 10:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:15.447 10:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:15.447 10:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:15.706 10:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:15.706 10:39:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:15.706 10:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:15.706 10:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ODQ4MmNmYzc5N2Q3YzNiYjIyMTBjYWY1MWMxMmEyMjM4Y2VmZDliOWM0OTI4Yzky/ePZDw==: --dhchap-ctrl-secret DHHC-1:03:M2ZmMDExZjRmODRjOTY1NjE5YTkwOWQxNjQ5YThhN2Q1YmE5NzFhZGI1ZDFmN2JhNzc5MmQ4NjI0ZWM4MDA5Yia8uyw=: 00:23:16.644 10:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:16.644 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:16.644 10:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:16.644 10:39:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.644 10:39:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.644 10:39:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.644 10:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:16.644 10:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:16.644 10:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:16.644 10:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:23:16.644 10:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:16.644 10:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:16.644 10:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:16.644 10:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:16.644 10:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:16.644 10:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:16.644 10:39:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.644 10:39:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.644 10:39:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.644 10:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:16.644 10:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:16.903 00:23:16.903 10:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:16.903 10:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:16.903 10:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:17.163 10:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.163 10:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:17.163 10:39:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.163 10:39:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.163 10:39:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.163 10:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:17.163 { 00:23:17.163 "cntlid": 35, 00:23:17.163 "qid": 0, 00:23:17.163 "state": "enabled", 00:23:17.163 "thread": "nvmf_tgt_poll_group_000", 00:23:17.163 "listen_address": { 00:23:17.163 "trtype": "TCP", 00:23:17.163 "adrfam": "IPv4", 00:23:17.163 "traddr": "10.0.0.2", 00:23:17.163 "trsvcid": "4420" 00:23:17.163 }, 00:23:17.163 "peer_address": { 00:23:17.163 "trtype": "TCP", 00:23:17.163 "adrfam": "IPv4", 00:23:17.163 "traddr": "10.0.0.1", 00:23:17.163 "trsvcid": "48386" 00:23:17.163 }, 00:23:17.163 "auth": { 00:23:17.163 "state": "completed", 00:23:17.163 "digest": "sha256", 00:23:17.163 "dhgroup": "ffdhe6144" 00:23:17.163 } 00:23:17.163 } 00:23:17.163 ]' 00:23:17.163 10:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:17.163 10:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:17.163 10:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:17.163 10:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:17.163 10:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:17.424 10:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:17.424 10:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:17.424 10:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:17.424 10:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:N2VmZTE4MTNhMzBiZmU4Mjg5NzM2OGE1ODhjNTQ1YTcJFoN9: --dhchap-ctrl-secret DHHC-1:02:M2NmZDA1ZmZhZWY2MjE0ZTI1OGM0OTJjNjhmNjQyNmQ5MDQ3MzgyYjdmZmY3NmZke0TaFQ==: 00:23:18.366 10:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:18.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:18.366 10:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:18.366 10:39:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.366 10:39:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.366 10:39:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.366 10:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:18.366 10:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:18.366 10:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:18.366 10:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:23:18.366 10:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:18.366 10:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:18.366 10:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:18.366 10:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:18.366 10:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:18.366 10:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:18.366 10:39:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.366 10:39:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.366 10:39:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.367 10:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:18.367 10:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:18.625 00:23:18.625 10:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:18.626 10:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:18.626 10:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:18.885 10:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.885 10:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:18.885 10:39:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.885 10:39:24 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:23:18.885 10:39:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.885 10:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:18.885 { 00:23:18.885 "cntlid": 37, 00:23:18.885 "qid": 0, 00:23:18.885 "state": "enabled", 00:23:18.885 "thread": "nvmf_tgt_poll_group_000", 00:23:18.885 "listen_address": { 00:23:18.885 "trtype": "TCP", 00:23:18.885 "adrfam": "IPv4", 00:23:18.885 "traddr": "10.0.0.2", 00:23:18.885 "trsvcid": "4420" 00:23:18.885 }, 00:23:18.885 "peer_address": { 00:23:18.885 "trtype": "TCP", 00:23:18.885 "adrfam": "IPv4", 00:23:18.885 "traddr": "10.0.0.1", 00:23:18.885 "trsvcid": "58488" 00:23:18.885 }, 00:23:18.885 "auth": { 00:23:18.885 "state": "completed", 00:23:18.885 "digest": "sha256", 00:23:18.885 "dhgroup": "ffdhe6144" 00:23:18.885 } 00:23:18.885 } 00:23:18.885 ]' 00:23:18.885 10:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:18.885 10:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:18.885 10:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:18.885 10:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:19.144 10:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:19.144 10:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:19.144 10:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:19.144 10:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:19.144 10:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:MTY0MGRmZjE2ZTAzNWQ0NjFkZjU2YjlhN2IwYmMzNjgwYjAwMDhjYzMwM2JkOTYyXnKU4Q==: --dhchap-ctrl-secret DHHC-1:01:NGQ2OTQ1MzJiYTljYTczYWEzYzA5MWUxNDYxYzZhZjkPEE46: 00:23:20.082 10:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:20.082 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:20.082 10:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:20.082 10:39:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.082 10:39:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.082 10:39:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.082 10:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:20.082 10:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:20.082 10:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:20.082 10:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:23:20.082 10:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:20.082 10:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:20.082 10:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:20.082 10:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:20.082 10:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:20.082 10:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:20.082 10:39:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.082 10:39:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.082 10:39:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.082 10:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:20.083 10:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:20.343 00:23:20.343 10:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:20.343 10:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:20.343 10:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:20.603 10:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.603 10:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:20.603 10:39:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.603 10:39:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.603 10:39:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.603 10:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:20.603 { 00:23:20.603 "cntlid": 39, 00:23:20.603 "qid": 0, 00:23:20.603 "state": "enabled", 00:23:20.603 "thread": "nvmf_tgt_poll_group_000", 00:23:20.603 "listen_address": { 00:23:20.603 "trtype": "TCP", 00:23:20.603 "adrfam": "IPv4", 00:23:20.603 "traddr": "10.0.0.2", 00:23:20.603 "trsvcid": "4420" 00:23:20.603 }, 00:23:20.603 "peer_address": { 00:23:20.603 "trtype": "TCP", 00:23:20.603 "adrfam": "IPv4", 00:23:20.603 "traddr": "10.0.0.1", 00:23:20.603 "trsvcid": "58518" 00:23:20.603 }, 00:23:20.603 "auth": { 00:23:20.603 "state": "completed", 00:23:20.603 "digest": "sha256", 00:23:20.603 "dhgroup": "ffdhe6144" 00:23:20.603 } 00:23:20.603 } 00:23:20.603 ]' 00:23:20.603 10:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:20.603 10:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:20.603 10:39:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:20.603 10:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:20.603 10:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:20.863 10:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:20.863 10:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:20.863 10:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:20.863 10:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:MmFjZjBjNGZmMTdkNjNhYjBmOTgzNmE0ZGVhMGU3M2YyZjgzNDhkNzcwYWJkYTIzOTdhMWEwY2UzMDMzZGQ1MpoHJWU=: 00:23:21.801 10:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:21.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:21.801 10:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:21.801 10:39:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.801 10:39:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.801 10:39:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.801 10:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:21.801 10:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:21.801 10:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:21.801 10:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:21.801 10:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:23:21.801 10:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:21.801 10:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:21.801 10:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:21.801 10:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:21.801 10:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:21.801 10:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:21.801 10:39:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.801 10:39:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.801 10:39:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.801 10:39:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:21.801 10:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:22.371 00:23:22.371 10:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:22.371 10:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:22.371 10:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:22.371 10:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.371 10:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:22.371 10:39:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.371 10:39:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.371 10:39:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.371 10:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:22.371 { 00:23:22.371 "cntlid": 41, 00:23:22.371 "qid": 0, 00:23:22.371 "state": "enabled", 00:23:22.371 "thread": "nvmf_tgt_poll_group_000", 00:23:22.371 "listen_address": { 00:23:22.371 "trtype": "TCP", 00:23:22.371 "adrfam": "IPv4", 00:23:22.371 "traddr": "10.0.0.2", 00:23:22.371 "trsvcid": "4420" 00:23:22.371 }, 00:23:22.371 "peer_address": { 00:23:22.371 "trtype": "TCP", 00:23:22.371 "adrfam": "IPv4", 00:23:22.371 "traddr": "10.0.0.1", 00:23:22.371 "trsvcid": "58552" 00:23:22.371 }, 00:23:22.371 "auth": { 00:23:22.371 "state": "completed", 00:23:22.371 "digest": "sha256", 00:23:22.371 "dhgroup": "ffdhe8192" 00:23:22.371 } 00:23:22.371 } 00:23:22.371 ]' 00:23:22.371 10:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:22.632 10:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:22.632 10:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:22.632 10:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:22.632 10:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:22.632 10:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:22.632 10:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:22.632 10:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:22.632 10:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret 
DHHC-1:00:ODQ4MmNmYzc5N2Q3YzNiYjIyMTBjYWY1MWMxMmEyMjM4Y2VmZDliOWM0OTI4Yzky/ePZDw==: --dhchap-ctrl-secret DHHC-1:03:M2ZmMDExZjRmODRjOTY1NjE5YTkwOWQxNjQ5YThhN2Q1YmE5NzFhZGI1ZDFmN2JhNzc5MmQ4NjI0ZWM4MDA5Yia8uyw=: 00:23:23.570 10:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:23.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:23.570 10:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:23.570 10:39:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.570 10:39:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.570 10:39:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.570 10:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:23.570 10:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:23.570 10:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:23.570 10:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:23:23.570 10:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:23.570 10:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:23.570 10:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:23.570 10:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:23.570 10:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:23.570 10:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:23.570 10:39:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.570 10:39:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.570 10:39:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.570 10:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:23.570 10:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:24.139 00:23:24.139 10:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:24.139 10:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:24.139 10:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:24.399 10:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.399 10:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:24.399 10:39:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.399 10:39:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.399 10:39:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.399 10:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:24.399 { 00:23:24.399 "cntlid": 43, 00:23:24.399 "qid": 0, 00:23:24.399 "state": "enabled", 00:23:24.399 "thread": "nvmf_tgt_poll_group_000", 00:23:24.399 "listen_address": { 00:23:24.399 "trtype": "TCP", 00:23:24.399 "adrfam": "IPv4", 00:23:24.399 "traddr": "10.0.0.2", 00:23:24.399 "trsvcid": "4420" 00:23:24.399 }, 00:23:24.399 "peer_address": { 00:23:24.399 "trtype": "TCP", 00:23:24.399 "adrfam": "IPv4", 00:23:24.399 "traddr": "10.0.0.1", 00:23:24.399 "trsvcid": "58580" 00:23:24.399 }, 00:23:24.399 "auth": { 00:23:24.399 "state": "completed", 00:23:24.399 "digest": "sha256", 00:23:24.399 "dhgroup": "ffdhe8192" 00:23:24.399 } 00:23:24.399 } 00:23:24.399 ]' 00:23:24.399 10:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:24.399 10:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:24.399 10:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:24.399 10:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:24.399 10:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:24.399 10:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:24.399 10:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:24.399 10:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:24.659 10:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:N2VmZTE4MTNhMzBiZmU4Mjg5NzM2OGE1ODhjNTQ1YTcJFoN9: --dhchap-ctrl-secret DHHC-1:02:M2NmZDA1ZmZhZWY2MjE0ZTI1OGM0OTJjNjhmNjQyNmQ5MDQ3MzgyYjdmZmY3NmZke0TaFQ==: 00:23:25.229 10:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:25.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:25.229 10:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:25.229 10:39:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.229 10:39:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.229 10:39:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.229 10:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:23:25.229 10:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:25.229 10:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:25.490 10:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:23:25.490 10:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:25.490 10:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:25.490 10:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:25.490 10:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:25.490 10:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:25.490 10:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:25.490 10:39:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.490 10:39:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.490 10:39:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.490 10:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:25.490 10:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:26.061 00:23:26.061 10:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:26.061 10:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:26.061 10:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:26.321 10:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.321 10:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:26.321 10:39:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.321 10:39:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:26.321 10:39:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.321 10:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:26.321 { 00:23:26.321 "cntlid": 45, 00:23:26.321 "qid": 0, 00:23:26.321 "state": "enabled", 00:23:26.321 "thread": "nvmf_tgt_poll_group_000", 00:23:26.321 "listen_address": { 00:23:26.321 "trtype": "TCP", 00:23:26.321 "adrfam": "IPv4", 00:23:26.321 "traddr": "10.0.0.2", 00:23:26.321 "trsvcid": "4420" 
00:23:26.321 }, 00:23:26.321 "peer_address": { 00:23:26.321 "trtype": "TCP", 00:23:26.321 "adrfam": "IPv4", 00:23:26.321 "traddr": "10.0.0.1", 00:23:26.321 "trsvcid": "58600" 00:23:26.321 }, 00:23:26.321 "auth": { 00:23:26.321 "state": "completed", 00:23:26.321 "digest": "sha256", 00:23:26.321 "dhgroup": "ffdhe8192" 00:23:26.321 } 00:23:26.321 } 00:23:26.321 ]' 00:23:26.321 10:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:26.321 10:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:26.321 10:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:26.321 10:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:26.321 10:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:26.321 10:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:26.321 10:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:26.321 10:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:26.582 10:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:MTY0MGRmZjE2ZTAzNWQ0NjFkZjU2YjlhN2IwYmMzNjgwYjAwMDhjYzMwM2JkOTYyXnKU4Q==: --dhchap-ctrl-secret DHHC-1:01:NGQ2OTQ1MzJiYTljYTczYWEzYzA5MWUxNDYxYzZhZjkPEE46: 00:23:27.152 10:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:27.152 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:27.152 10:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:27.152 10:39:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.152 10:39:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.152 10:39:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.152 10:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:27.152 10:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:27.152 10:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:27.418 10:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:23:27.418 10:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:27.418 10:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:27.418 10:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:27.418 10:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:27.418 10:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:27.418 10:39:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:27.418 10:39:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.418 10:39:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.418 10:39:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.418 10:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:27.418 10:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:27.985 00:23:27.985 10:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:27.985 10:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:27.985 10:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:27.985 10:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.985 10:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:27.985 10:39:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.985 10:39:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.985 10:39:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.985 10:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:27.985 { 00:23:27.985 "cntlid": 47, 00:23:27.985 "qid": 0, 00:23:27.985 "state": "enabled", 00:23:27.985 "thread": "nvmf_tgt_poll_group_000", 00:23:27.985 "listen_address": { 00:23:27.985 "trtype": "TCP", 00:23:27.985 "adrfam": "IPv4", 00:23:27.985 "traddr": "10.0.0.2", 00:23:27.985 "trsvcid": "4420" 00:23:27.985 }, 00:23:27.985 "peer_address": { 00:23:27.985 "trtype": "TCP", 00:23:27.985 "adrfam": "IPv4", 00:23:27.985 "traddr": "10.0.0.1", 00:23:27.985 "trsvcid": "45134" 00:23:27.985 }, 00:23:27.985 "auth": { 00:23:27.985 "state": "completed", 00:23:27.985 "digest": "sha256", 00:23:27.985 "dhgroup": "ffdhe8192" 00:23:27.985 } 00:23:27.985 } 00:23:27.985 ]' 00:23:27.985 10:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:28.245 10:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:28.245 10:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:28.245 10:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:28.245 10:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:28.245 10:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:28.245 10:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:28.245 
10:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:28.504 10:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:MmFjZjBjNGZmMTdkNjNhYjBmOTgzNmE0ZGVhMGU3M2YyZjgzNDhkNzcwYWJkYTIzOTdhMWEwY2UzMDMzZGQ1MpoHJWU=: 00:23:29.071 10:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:29.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:29.071 10:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:29.071 10:39:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.071 10:39:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:29.071 10:39:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.071 10:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:23:29.071 10:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:29.071 10:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:29.072 10:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:29.072 10:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:29.330 10:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:23:29.330 10:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:29.330 10:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:29.330 10:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:23:29.330 10:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:29.330 10:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:29.330 10:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:29.330 10:39:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.330 10:39:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:29.330 10:39:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.330 10:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:29.330 10:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:29.589 00:23:29.589 10:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:29.589 10:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:29.589 10:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:29.589 10:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.589 10:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:29.589 10:39:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.589 10:39:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:29.589 10:39:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.589 10:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:29.589 { 00:23:29.589 "cntlid": 49, 00:23:29.589 "qid": 0, 00:23:29.589 "state": "enabled", 00:23:29.589 "thread": "nvmf_tgt_poll_group_000", 00:23:29.589 "listen_address": { 00:23:29.589 "trtype": "TCP", 00:23:29.589 "adrfam": "IPv4", 00:23:29.589 "traddr": "10.0.0.2", 00:23:29.589 "trsvcid": "4420" 00:23:29.589 }, 00:23:29.589 "peer_address": { 00:23:29.589 "trtype": "TCP", 00:23:29.589 "adrfam": "IPv4", 00:23:29.589 "traddr": "10.0.0.1", 00:23:29.589 "trsvcid": "45172" 00:23:29.589 }, 00:23:29.589 "auth": { 00:23:29.589 "state": "completed", 00:23:29.589 "digest": "sha384", 00:23:29.589 "dhgroup": "null" 00:23:29.589 } 00:23:29.589 } 00:23:29.589 ]' 00:23:29.589 10:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:29.849 10:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:29.849 10:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:29.849 10:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:23:29.849 10:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:29.849 10:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:29.849 10:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:29.849 10:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:30.109 10:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ODQ4MmNmYzc5N2Q3YzNiYjIyMTBjYWY1MWMxMmEyMjM4Y2VmZDliOWM0OTI4Yzky/ePZDw==: --dhchap-ctrl-secret DHHC-1:03:M2ZmMDExZjRmODRjOTY1NjE5YTkwOWQxNjQ5YThhN2Q1YmE5NzFhZGI1ZDFmN2JhNzc5MmQ4NjI0ZWM4MDA5Yia8uyw=: 00:23:30.681 10:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:30.681 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:30.681 10:39:36 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:30.681 10:39:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.681 10:39:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:30.681 10:39:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.681 10:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:30.681 10:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:30.681 10:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:30.942 10:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:23:30.942 10:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:30.942 10:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:30.942 10:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:23:30.942 10:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:30.942 10:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:30.942 10:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:30.942 10:39:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.942 10:39:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:30.942 10:39:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.942 10:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:30.942 10:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:31.201 00:23:31.201 10:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:31.201 10:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:31.201 10:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:31.201 10:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.201 10:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:31.201 10:39:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.201 10:39:36 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:23:31.201 10:39:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.201 10:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:31.201 { 00:23:31.201 "cntlid": 51, 00:23:31.201 "qid": 0, 00:23:31.201 "state": "enabled", 00:23:31.201 "thread": "nvmf_tgt_poll_group_000", 00:23:31.201 "listen_address": { 00:23:31.201 "trtype": "TCP", 00:23:31.201 "adrfam": "IPv4", 00:23:31.201 "traddr": "10.0.0.2", 00:23:31.201 "trsvcid": "4420" 00:23:31.201 }, 00:23:31.201 "peer_address": { 00:23:31.201 "trtype": "TCP", 00:23:31.201 "adrfam": "IPv4", 00:23:31.201 "traddr": "10.0.0.1", 00:23:31.201 "trsvcid": "45212" 00:23:31.201 }, 00:23:31.201 "auth": { 00:23:31.201 "state": "completed", 00:23:31.201 "digest": "sha384", 00:23:31.201 "dhgroup": "null" 00:23:31.201 } 00:23:31.201 } 00:23:31.201 ]' 00:23:31.201 10:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:31.461 10:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:31.461 10:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:31.461 10:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:23:31.461 10:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:31.461 10:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:31.461 10:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:31.461 10:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:31.721 10:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:N2VmZTE4MTNhMzBiZmU4Mjg5NzM2OGE1ODhjNTQ1YTcJFoN9: --dhchap-ctrl-secret DHHC-1:02:M2NmZDA1ZmZhZWY2MjE0ZTI1OGM0OTJjNjhmNjQyNmQ5MDQ3MzgyYjdmZmY3NmZke0TaFQ==: 00:23:32.291 10:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:32.291 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:32.291 10:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:32.291 10:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.291 10:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:32.291 10:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.291 10:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:32.291 10:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:32.291 10:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:32.551 10:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:23:32.551 10:39:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:32.551 10:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:32.551 10:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:23:32.551 10:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:32.551 10:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:32.551 10:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:32.551 10:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.551 10:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:32.551 10:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.551 10:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:32.552 10:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:32.812 00:23:32.812 10:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:32.812 10:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:32.812 10:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:32.812 10:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.812 10:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:32.812 10:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.812 10:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:32.812 10:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.812 10:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:32.812 { 00:23:32.812 "cntlid": 53, 00:23:32.813 "qid": 0, 00:23:32.813 "state": "enabled", 00:23:32.813 "thread": "nvmf_tgt_poll_group_000", 00:23:32.813 "listen_address": { 00:23:32.813 "trtype": "TCP", 00:23:32.813 "adrfam": "IPv4", 00:23:32.813 "traddr": "10.0.0.2", 00:23:32.813 "trsvcid": "4420" 00:23:32.813 }, 00:23:32.813 "peer_address": { 00:23:32.813 "trtype": "TCP", 00:23:32.813 "adrfam": "IPv4", 00:23:32.813 "traddr": "10.0.0.1", 00:23:32.813 "trsvcid": "45252" 00:23:32.813 }, 00:23:32.813 "auth": { 00:23:32.813 "state": "completed", 00:23:32.813 "digest": "sha384", 00:23:32.813 "dhgroup": "null" 00:23:32.813 } 00:23:32.813 } 00:23:32.813 ]' 00:23:32.813 10:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:33.073 10:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:23:33.073 10:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:33.073 10:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:23:33.073 10:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:33.073 10:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:33.073 10:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:33.073 10:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:33.073 10:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:MTY0MGRmZjE2ZTAzNWQ0NjFkZjU2YjlhN2IwYmMzNjgwYjAwMDhjYzMwM2JkOTYyXnKU4Q==: --dhchap-ctrl-secret DHHC-1:01:NGQ2OTQ1MzJiYTljYTczYWEzYzA5MWUxNDYxYzZhZjkPEE46: 00:23:34.014 10:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:34.014 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:34.014 10:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:34.014 10:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.014 10:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:34.014 10:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.014 10:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:34.014 10:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:34.014 10:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:34.014 10:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:23:34.014 10:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:34.014 10:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:34.014 10:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:23:34.014 10:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:34.014 10:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:34.014 10:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:34.014 10:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.014 10:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:34.014 10:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.014 10:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:34.014 10:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:34.274 00:23:34.274 10:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:34.274 10:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:34.274 10:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:34.553 10:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.553 10:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:34.553 10:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.553 10:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:34.553 10:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.553 10:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:34.553 { 00:23:34.553 "cntlid": 55, 00:23:34.553 "qid": 0, 00:23:34.553 "state": "enabled", 00:23:34.553 "thread": "nvmf_tgt_poll_group_000", 00:23:34.553 "listen_address": { 00:23:34.553 "trtype": "TCP", 00:23:34.553 "adrfam": "IPv4", 00:23:34.553 "traddr": "10.0.0.2", 00:23:34.553 "trsvcid": "4420" 00:23:34.553 }, 00:23:34.553 "peer_address": { 00:23:34.553 "trtype": "TCP", 00:23:34.553 "adrfam": "IPv4", 00:23:34.553 "traddr": "10.0.0.1", 00:23:34.553 "trsvcid": "45288" 00:23:34.553 }, 00:23:34.553 "auth": { 00:23:34.553 "state": "completed", 00:23:34.553 "digest": "sha384", 00:23:34.553 "dhgroup": "null" 00:23:34.553 } 00:23:34.553 } 00:23:34.553 ]' 00:23:34.553 10:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:34.553 10:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:34.553 10:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:34.553 10:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:23:34.553 10:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:34.553 10:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:34.553 10:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:34.553 10:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:34.813 10:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:MmFjZjBjNGZmMTdkNjNhYjBmOTgzNmE0ZGVhMGU3M2YyZjgzNDhkNzcwYWJkYTIzOTdhMWEwY2UzMDMzZGQ1MpoHJWU=: 00:23:35.386 10:39:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:35.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:35.386 10:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:35.386 10:39:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.386 10:39:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.386 10:39:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.386 10:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:35.386 10:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:35.386 10:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:35.386 10:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:35.679 10:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:23:35.679 10:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:35.679 10:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:35.679 10:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:35.679 10:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:35.679 10:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:35.679 10:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:35.679 10:39:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.679 10:39:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.679 10:39:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.679 10:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:35.679 10:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:35.965 00:23:35.965 10:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:35.965 10:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:35.965 10:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:35.965 10:39:41 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.965 10:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:35.965 10:39:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.965 10:39:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.965 10:39:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.965 10:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:35.965 { 00:23:35.965 "cntlid": 57, 00:23:35.965 "qid": 0, 00:23:35.965 "state": "enabled", 00:23:35.965 "thread": "nvmf_tgt_poll_group_000", 00:23:35.965 "listen_address": { 00:23:35.965 "trtype": "TCP", 00:23:35.965 "adrfam": "IPv4", 00:23:35.965 "traddr": "10.0.0.2", 00:23:35.965 "trsvcid": "4420" 00:23:35.965 }, 00:23:35.965 "peer_address": { 00:23:35.965 "trtype": "TCP", 00:23:35.965 "adrfam": "IPv4", 00:23:35.965 "traddr": "10.0.0.1", 00:23:35.965 "trsvcid": "45304" 00:23:35.965 }, 00:23:35.965 "auth": { 00:23:35.965 "state": "completed", 00:23:35.965 "digest": "sha384", 00:23:35.965 "dhgroup": "ffdhe2048" 00:23:35.965 } 00:23:35.965 } 00:23:35.965 ]' 00:23:35.965 10:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:36.224 10:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:36.224 10:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:36.224 10:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:36.224 10:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:36.224 10:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:36.224 10:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:36.224 10:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:36.485 10:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ODQ4MmNmYzc5N2Q3YzNiYjIyMTBjYWY1MWMxMmEyMjM4Y2VmZDliOWM0OTI4Yzky/ePZDw==: --dhchap-ctrl-secret DHHC-1:03:M2ZmMDExZjRmODRjOTY1NjE5YTkwOWQxNjQ5YThhN2Q1YmE5NzFhZGI1ZDFmN2JhNzc5MmQ4NjI0ZWM4MDA5Yia8uyw=: 00:23:37.073 10:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:37.073 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:37.073 10:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:37.073 10:39:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.073 10:39:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:37.073 10:39:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.073 10:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:37.073 10:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:37.073 10:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:37.333 10:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:23:37.333 10:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:37.333 10:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:37.333 10:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:37.333 10:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:37.333 10:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:37.333 10:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:37.333 10:39:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.333 10:39:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:37.333 10:39:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.333 10:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:37.333 10:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:37.594 00:23:37.594 10:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:37.594 10:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:37.594 10:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:37.594 10:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.594 10:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:37.594 10:39:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.594 10:39:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:37.594 10:39:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.594 10:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:37.594 { 00:23:37.594 "cntlid": 59, 00:23:37.594 "qid": 0, 00:23:37.594 "state": "enabled", 00:23:37.594 "thread": "nvmf_tgt_poll_group_000", 00:23:37.594 "listen_address": { 00:23:37.594 "trtype": "TCP", 00:23:37.594 "adrfam": "IPv4", 00:23:37.594 "traddr": "10.0.0.2", 00:23:37.594 "trsvcid": "4420" 00:23:37.594 }, 00:23:37.594 "peer_address": { 00:23:37.594 "trtype": "TCP", 00:23:37.594 "adrfam": "IPv4", 00:23:37.594 
"traddr": "10.0.0.1", 00:23:37.594 "trsvcid": "38908" 00:23:37.594 }, 00:23:37.594 "auth": { 00:23:37.594 "state": "completed", 00:23:37.594 "digest": "sha384", 00:23:37.594 "dhgroup": "ffdhe2048" 00:23:37.594 } 00:23:37.594 } 00:23:37.594 ]' 00:23:37.594 10:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:37.855 10:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:37.855 10:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:37.855 10:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:37.855 10:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:37.855 10:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:37.855 10:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:37.855 10:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:38.116 10:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:N2VmZTE4MTNhMzBiZmU4Mjg5NzM2OGE1ODhjNTQ1YTcJFoN9: --dhchap-ctrl-secret DHHC-1:02:M2NmZDA1ZmZhZWY2MjE0ZTI1OGM0OTJjNjhmNjQyNmQ5MDQ3MzgyYjdmZmY3NmZke0TaFQ==: 00:23:38.687 10:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:38.687 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:38.687 10:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:38.687 10:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.687 10:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:38.687 10:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.687 10:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:38.687 10:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:38.687 10:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:38.948 10:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:23:38.948 10:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:38.948 10:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:38.948 10:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:38.948 10:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:38.948 10:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:38.948 10:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:38.948 10:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.948 10:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:38.948 10:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.948 10:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:38.948 10:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:39.207 00:23:39.207 10:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:39.207 10:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:39.207 10:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:39.207 10:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.207 10:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:39.207 10:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.207 10:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:39.207 10:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.207 10:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:39.207 { 00:23:39.207 "cntlid": 61, 00:23:39.207 "qid": 0, 00:23:39.207 "state": "enabled", 00:23:39.207 "thread": "nvmf_tgt_poll_group_000", 00:23:39.207 "listen_address": { 00:23:39.207 "trtype": "TCP", 00:23:39.207 "adrfam": "IPv4", 00:23:39.207 "traddr": "10.0.0.2", 00:23:39.207 "trsvcid": "4420" 00:23:39.207 }, 00:23:39.207 "peer_address": { 00:23:39.207 "trtype": "TCP", 00:23:39.207 "adrfam": "IPv4", 00:23:39.207 "traddr": "10.0.0.1", 00:23:39.207 "trsvcid": "38934" 00:23:39.207 }, 00:23:39.207 "auth": { 00:23:39.207 "state": "completed", 00:23:39.207 "digest": "sha384", 00:23:39.207 "dhgroup": "ffdhe2048" 00:23:39.207 } 00:23:39.207 } 00:23:39.207 ]' 00:23:39.207 10:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:39.466 10:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:39.466 10:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:39.466 10:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:39.466 10:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:39.467 10:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:39.467 10:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:39.467 10:39:45 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:39.726 10:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:MTY0MGRmZjE2ZTAzNWQ0NjFkZjU2YjlhN2IwYmMzNjgwYjAwMDhjYzMwM2JkOTYyXnKU4Q==: --dhchap-ctrl-secret DHHC-1:01:NGQ2OTQ1MzJiYTljYTczYWEzYzA5MWUxNDYxYzZhZjkPEE46: 00:23:40.295 10:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:40.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:40.295 10:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:40.295 10:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.295 10:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:40.295 10:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.295 10:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:40.295 10:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:40.295 10:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:40.555 10:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:23:40.555 10:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:40.555 10:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:40.555 10:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:40.555 10:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:40.555 10:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:40.555 10:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:40.555 10:39:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.555 10:39:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:40.555 10:39:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.555 10:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:40.555 10:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:40.816 00:23:40.816 10:39:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:40.816 10:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:40.816 10:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:40.816 10:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.816 10:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:40.816 10:39:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.816 10:39:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:40.816 10:39:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.816 10:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:40.816 { 00:23:40.816 "cntlid": 63, 00:23:40.816 "qid": 0, 00:23:40.816 "state": "enabled", 00:23:40.816 "thread": "nvmf_tgt_poll_group_000", 00:23:40.816 "listen_address": { 00:23:40.816 "trtype": "TCP", 00:23:40.816 "adrfam": "IPv4", 00:23:40.816 "traddr": "10.0.0.2", 00:23:40.816 "trsvcid": "4420" 00:23:40.816 }, 00:23:40.816 "peer_address": { 00:23:40.816 "trtype": "TCP", 00:23:40.816 "adrfam": "IPv4", 00:23:40.816 "traddr": "10.0.0.1", 00:23:40.816 "trsvcid": "38956" 00:23:40.816 }, 00:23:40.816 "auth": { 00:23:40.816 "state": "completed", 00:23:40.816 "digest": "sha384", 00:23:40.816 "dhgroup": "ffdhe2048" 00:23:40.816 } 00:23:40.816 } 00:23:40.816 ]' 00:23:40.816 10:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:41.077 10:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:41.077 10:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:41.077 10:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:41.077 10:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:41.077 10:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:41.077 10:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:41.077 10:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:41.337 10:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:MmFjZjBjNGZmMTdkNjNhYjBmOTgzNmE0ZGVhMGU3M2YyZjgzNDhkNzcwYWJkYTIzOTdhMWEwY2UzMDMzZGQ1MpoHJWU=: 00:23:41.908 10:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:41.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:41.908 10:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:41.908 10:39:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.908 10:39:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:23:41.908 10:39:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.908 10:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:41.908 10:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:41.908 10:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:41.908 10:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:42.168 10:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:23:42.168 10:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:42.168 10:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:42.168 10:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:23:42.168 10:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:42.168 10:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:42.168 10:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:42.168 10:39:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.168 10:39:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:42.168 10:39:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.168 10:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:42.168 10:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:42.444 00:23:42.444 10:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:42.444 10:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:42.444 10:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:42.444 10:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.444 10:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:42.444 10:39:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.444 10:39:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:42.444 10:39:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.444 10:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:42.444 { 
00:23:42.444 "cntlid": 65, 00:23:42.444 "qid": 0, 00:23:42.444 "state": "enabled", 00:23:42.444 "thread": "nvmf_tgt_poll_group_000", 00:23:42.444 "listen_address": { 00:23:42.444 "trtype": "TCP", 00:23:42.444 "adrfam": "IPv4", 00:23:42.444 "traddr": "10.0.0.2", 00:23:42.444 "trsvcid": "4420" 00:23:42.444 }, 00:23:42.444 "peer_address": { 00:23:42.444 "trtype": "TCP", 00:23:42.444 "adrfam": "IPv4", 00:23:42.444 "traddr": "10.0.0.1", 00:23:42.444 "trsvcid": "38978" 00:23:42.444 }, 00:23:42.444 "auth": { 00:23:42.444 "state": "completed", 00:23:42.444 "digest": "sha384", 00:23:42.444 "dhgroup": "ffdhe3072" 00:23:42.444 } 00:23:42.444 } 00:23:42.444 ]' 00:23:42.444 10:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:42.444 10:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:42.444 10:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:42.707 10:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:42.707 10:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:42.707 10:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:42.707 10:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:42.707 10:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:42.707 10:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ODQ4MmNmYzc5N2Q3YzNiYjIyMTBjYWY1MWMxMmEyMjM4Y2VmZDliOWM0OTI4Yzky/ePZDw==: --dhchap-ctrl-secret DHHC-1:03:M2ZmMDExZjRmODRjOTY1NjE5YTkwOWQxNjQ5YThhN2Q1YmE5NzFhZGI1ZDFmN2JhNzc5MmQ4NjI0ZWM4MDA5Yia8uyw=: 00:23:43.644 10:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:43.644 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:43.644 10:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:43.644 10:39:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.644 10:39:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:43.644 10:39:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.644 10:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:43.644 10:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:43.644 10:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:43.644 10:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:23:43.644 10:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:43.644 10:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:23:43.644 10:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:23:43.644 10:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:43.644 10:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:43.644 10:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:43.644 10:39:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.644 10:39:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:43.644 10:39:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.644 10:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:43.644 10:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:43.904 00:23:43.904 10:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:43.904 10:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:43.904 10:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:44.165 10:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.165 10:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:44.165 10:39:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.165 10:39:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:44.165 10:39:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.165 10:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:44.165 { 00:23:44.165 "cntlid": 67, 00:23:44.165 "qid": 0, 00:23:44.165 "state": "enabled", 00:23:44.165 "thread": "nvmf_tgt_poll_group_000", 00:23:44.165 "listen_address": { 00:23:44.165 "trtype": "TCP", 00:23:44.165 "adrfam": "IPv4", 00:23:44.165 "traddr": "10.0.0.2", 00:23:44.165 "trsvcid": "4420" 00:23:44.165 }, 00:23:44.165 "peer_address": { 00:23:44.165 "trtype": "TCP", 00:23:44.165 "adrfam": "IPv4", 00:23:44.165 "traddr": "10.0.0.1", 00:23:44.165 "trsvcid": "39012" 00:23:44.165 }, 00:23:44.165 "auth": { 00:23:44.165 "state": "completed", 00:23:44.165 "digest": "sha384", 00:23:44.165 "dhgroup": "ffdhe3072" 00:23:44.165 } 00:23:44.165 } 00:23:44.165 ]' 00:23:44.165 10:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:44.165 10:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:44.165 10:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:44.165 10:39:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:44.165 10:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:44.165 10:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:44.165 10:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:44.165 10:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:44.426 10:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:N2VmZTE4MTNhMzBiZmU4Mjg5NzM2OGE1ODhjNTQ1YTcJFoN9: --dhchap-ctrl-secret DHHC-1:02:M2NmZDA1ZmZhZWY2MjE0ZTI1OGM0OTJjNjhmNjQyNmQ5MDQ3MzgyYjdmZmY3NmZke0TaFQ==: 00:23:44.994 10:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:45.254 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:45.254 10:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:45.254 10:39:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.254 10:39:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:45.254 10:39:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.254 10:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:45.254 10:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:45.254 10:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:45.254 10:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:23:45.254 10:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:45.254 10:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:45.254 10:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:23:45.254 10:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:45.254 10:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:45.254 10:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:45.254 10:39:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.255 10:39:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:45.255 10:39:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.255 10:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:45.255 10:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:45.514 00:23:45.514 10:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:45.514 10:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:45.514 10:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:45.774 10:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:45.774 10:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:45.774 10:39:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.774 10:39:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:45.774 10:39:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.774 10:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:45.774 { 00:23:45.774 "cntlid": 69, 00:23:45.774 "qid": 0, 00:23:45.774 "state": "enabled", 00:23:45.774 "thread": "nvmf_tgt_poll_group_000", 00:23:45.774 "listen_address": { 00:23:45.774 "trtype": "TCP", 00:23:45.774 "adrfam": "IPv4", 00:23:45.774 "traddr": "10.0.0.2", 00:23:45.774 "trsvcid": "4420" 00:23:45.774 }, 00:23:45.774 "peer_address": { 00:23:45.774 "trtype": "TCP", 00:23:45.774 "adrfam": "IPv4", 00:23:45.774 "traddr": "10.0.0.1", 00:23:45.774 "trsvcid": "39034" 00:23:45.774 }, 00:23:45.774 "auth": { 00:23:45.774 "state": "completed", 00:23:45.774 "digest": "sha384", 00:23:45.774 "dhgroup": "ffdhe3072" 00:23:45.774 } 00:23:45.774 } 00:23:45.774 ]' 00:23:45.774 10:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:45.774 10:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:45.774 10:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:45.774 10:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:45.774 10:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:45.774 10:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:45.774 10:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:45.774 10:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:46.034 10:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:MTY0MGRmZjE2ZTAzNWQ0NjFkZjU2YjlhN2IwYmMzNjgwYjAwMDhjYzMwM2JkOTYyXnKU4Q==: --dhchap-ctrl-secret 
DHHC-1:01:NGQ2OTQ1MzJiYTljYTczYWEzYzA5MWUxNDYxYzZhZjkPEE46: 00:23:46.604 10:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:46.604 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:46.604 10:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:46.604 10:39:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.604 10:39:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.604 10:39:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.604 10:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:46.604 10:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:46.604 10:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:46.866 10:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:23:46.866 10:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:46.866 10:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:46.866 10:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:23:46.866 10:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:46.866 10:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:46.866 10:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:46.866 10:39:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.866 10:39:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.866 10:39:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.866 10:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:46.866 10:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:47.127 00:23:47.127 10:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:47.127 10:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:47.127 10:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:47.388 10:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.388 10:39:52 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:47.388 10:39:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.388 10:39:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:47.388 10:39:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.388 10:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:47.388 { 00:23:47.388 "cntlid": 71, 00:23:47.388 "qid": 0, 00:23:47.388 "state": "enabled", 00:23:47.388 "thread": "nvmf_tgt_poll_group_000", 00:23:47.388 "listen_address": { 00:23:47.388 "trtype": "TCP", 00:23:47.388 "adrfam": "IPv4", 00:23:47.388 "traddr": "10.0.0.2", 00:23:47.388 "trsvcid": "4420" 00:23:47.388 }, 00:23:47.388 "peer_address": { 00:23:47.388 "trtype": "TCP", 00:23:47.388 "adrfam": "IPv4", 00:23:47.388 "traddr": "10.0.0.1", 00:23:47.388 "trsvcid": "49174" 00:23:47.388 }, 00:23:47.388 "auth": { 00:23:47.388 "state": "completed", 00:23:47.388 "digest": "sha384", 00:23:47.388 "dhgroup": "ffdhe3072" 00:23:47.388 } 00:23:47.388 } 00:23:47.388 ]' 00:23:47.388 10:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:47.388 10:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:47.388 10:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:47.388 10:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:47.388 10:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:47.388 10:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:47.388 10:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:47.388 10:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:47.649 10:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:MmFjZjBjNGZmMTdkNjNhYjBmOTgzNmE0ZGVhMGU3M2YyZjgzNDhkNzcwYWJkYTIzOTdhMWEwY2UzMDMzZGQ1MpoHJWU=: 00:23:48.217 10:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:48.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:48.217 10:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:48.217 10:39:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.217 10:39:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:48.217 10:39:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.217 10:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:48.217 10:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:48.217 10:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:48.217 10:39:53 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:48.477 10:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:23:48.477 10:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:48.477 10:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:48.477 10:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:48.477 10:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:48.477 10:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:48.477 10:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:48.477 10:39:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.477 10:39:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:48.477 10:39:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.477 10:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:48.477 10:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:48.736 00:23:48.736 10:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:48.736 10:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:48.736 10:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:48.996 10:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.996 10:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:48.996 10:39:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.996 10:39:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:48.996 10:39:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.996 10:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:48.996 { 00:23:48.996 "cntlid": 73, 00:23:48.996 "qid": 0, 00:23:48.996 "state": "enabled", 00:23:48.996 "thread": "nvmf_tgt_poll_group_000", 00:23:48.996 "listen_address": { 00:23:48.996 "trtype": "TCP", 00:23:48.996 "adrfam": "IPv4", 00:23:48.996 "traddr": "10.0.0.2", 00:23:48.996 "trsvcid": "4420" 00:23:48.996 }, 00:23:48.996 "peer_address": { 00:23:48.996 "trtype": "TCP", 00:23:48.996 "adrfam": "IPv4", 00:23:48.996 "traddr": "10.0.0.1", 00:23:48.996 "trsvcid": "49206" 00:23:48.996 }, 00:23:48.996 "auth": { 00:23:48.996 
"state": "completed", 00:23:48.996 "digest": "sha384", 00:23:48.996 "dhgroup": "ffdhe4096" 00:23:48.996 } 00:23:48.996 } 00:23:48.996 ]' 00:23:48.996 10:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:48.996 10:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:48.996 10:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:48.996 10:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:48.996 10:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:48.996 10:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:48.996 10:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:48.996 10:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:49.256 10:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ODQ4MmNmYzc5N2Q3YzNiYjIyMTBjYWY1MWMxMmEyMjM4Y2VmZDliOWM0OTI4Yzky/ePZDw==: --dhchap-ctrl-secret DHHC-1:03:M2ZmMDExZjRmODRjOTY1NjE5YTkwOWQxNjQ5YThhN2Q1YmE5NzFhZGI1ZDFmN2JhNzc5MmQ4NjI0ZWM4MDA5Yia8uyw=: 00:23:49.825 10:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:49.825 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:49.825 10:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:49.825 10:39:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.825 10:39:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:49.825 10:39:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.825 10:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:49.825 10:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:49.825 10:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:50.085 10:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:23:50.085 10:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:50.085 10:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:50.085 10:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:50.085 10:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:50.085 10:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:50.085 10:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:50.085 10:39:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.085 10:39:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:50.085 10:39:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.085 10:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:50.085 10:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:50.344 00:23:50.344 10:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:50.344 10:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:50.344 10:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:50.604 10:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.604 10:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:50.604 10:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.604 10:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:50.604 10:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.604 10:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:50.604 { 00:23:50.604 "cntlid": 75, 00:23:50.604 "qid": 0, 00:23:50.604 "state": "enabled", 00:23:50.604 "thread": "nvmf_tgt_poll_group_000", 00:23:50.604 "listen_address": { 00:23:50.604 "trtype": "TCP", 00:23:50.604 "adrfam": "IPv4", 00:23:50.604 "traddr": "10.0.0.2", 00:23:50.604 "trsvcid": "4420" 00:23:50.604 }, 00:23:50.604 "peer_address": { 00:23:50.604 "trtype": "TCP", 00:23:50.604 "adrfam": "IPv4", 00:23:50.604 "traddr": "10.0.0.1", 00:23:50.604 "trsvcid": "49214" 00:23:50.604 }, 00:23:50.604 "auth": { 00:23:50.604 "state": "completed", 00:23:50.604 "digest": "sha384", 00:23:50.604 "dhgroup": "ffdhe4096" 00:23:50.604 } 00:23:50.604 } 00:23:50.604 ]' 00:23:50.604 10:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:50.604 10:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:50.604 10:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:50.604 10:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:50.604 10:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:50.604 10:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:50.604 10:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:50.604 10:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:50.864 10:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:N2VmZTE4MTNhMzBiZmU4Mjg5NzM2OGE1ODhjNTQ1YTcJFoN9: --dhchap-ctrl-secret DHHC-1:02:M2NmZDA1ZmZhZWY2MjE0ZTI1OGM0OTJjNjhmNjQyNmQ5MDQ3MzgyYjdmZmY3NmZke0TaFQ==: 00:23:51.433 10:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:51.693 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:51.693 10:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:51.693 10:39:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.693 10:39:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:51.693 10:39:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.693 10:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:51.693 10:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:51.693 10:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:51.693 10:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:23:51.693 10:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:51.693 10:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:51.693 10:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:51.693 10:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:51.693 10:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:51.693 10:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:51.693 10:39:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.693 10:39:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:51.693 10:39:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.693 10:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:51.693 10:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:23:51.952 00:23:51.952 10:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:51.952 10:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:51.952 10:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:52.211 10:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.211 10:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:52.211 10:39:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.211 10:39:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:52.212 10:39:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.212 10:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:52.212 { 00:23:52.212 "cntlid": 77, 00:23:52.212 "qid": 0, 00:23:52.212 "state": "enabled", 00:23:52.212 "thread": "nvmf_tgt_poll_group_000", 00:23:52.212 "listen_address": { 00:23:52.212 "trtype": "TCP", 00:23:52.212 "adrfam": "IPv4", 00:23:52.212 "traddr": "10.0.0.2", 00:23:52.212 "trsvcid": "4420" 00:23:52.212 }, 00:23:52.212 "peer_address": { 00:23:52.212 "trtype": "TCP", 00:23:52.212 "adrfam": "IPv4", 00:23:52.212 "traddr": "10.0.0.1", 00:23:52.212 "trsvcid": "49238" 00:23:52.212 }, 00:23:52.212 "auth": { 00:23:52.212 "state": "completed", 00:23:52.212 "digest": "sha384", 00:23:52.212 "dhgroup": "ffdhe4096" 00:23:52.212 } 00:23:52.212 } 00:23:52.212 ]' 00:23:52.212 10:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:52.212 10:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:52.212 10:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:52.212 10:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:52.212 10:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:52.212 10:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:52.212 10:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:52.212 10:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:52.472 10:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:MTY0MGRmZjE2ZTAzNWQ0NjFkZjU2YjlhN2IwYmMzNjgwYjAwMDhjYzMwM2JkOTYyXnKU4Q==: --dhchap-ctrl-secret DHHC-1:01:NGQ2OTQ1MzJiYTljYTczYWEzYzA5MWUxNDYxYzZhZjkPEE46: 00:23:53.409 10:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:53.409 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:53.409 10:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:53.409 10:39:58 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.409 10:39:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:53.409 10:39:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.409 10:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:53.409 10:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:53.409 10:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:53.409 10:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:23:53.409 10:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:53.409 10:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:53.409 10:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:53.409 10:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:53.409 10:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:53.409 10:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:53.409 10:39:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.409 10:39:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:53.409 10:39:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.409 10:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:53.409 10:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:53.670 00:23:53.670 10:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:53.670 10:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:53.670 10:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:53.670 10:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.670 10:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:53.670 10:39:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.670 10:39:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:53.670 10:39:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.670 10:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:53.670 { 00:23:53.670 "cntlid": 79, 00:23:53.670 "qid": 
0, 00:23:53.670 "state": "enabled", 00:23:53.670 "thread": "nvmf_tgt_poll_group_000", 00:23:53.670 "listen_address": { 00:23:53.670 "trtype": "TCP", 00:23:53.670 "adrfam": "IPv4", 00:23:53.670 "traddr": "10.0.0.2", 00:23:53.670 "trsvcid": "4420" 00:23:53.670 }, 00:23:53.670 "peer_address": { 00:23:53.670 "trtype": "TCP", 00:23:53.670 "adrfam": "IPv4", 00:23:53.670 "traddr": "10.0.0.1", 00:23:53.670 "trsvcid": "49274" 00:23:53.670 }, 00:23:53.670 "auth": { 00:23:53.670 "state": "completed", 00:23:53.670 "digest": "sha384", 00:23:53.670 "dhgroup": "ffdhe4096" 00:23:53.670 } 00:23:53.670 } 00:23:53.670 ]' 00:23:53.670 10:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:53.930 10:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:53.930 10:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:53.930 10:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:53.930 10:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:53.930 10:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:53.930 10:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:53.930 10:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:54.191 10:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:MmFjZjBjNGZmMTdkNjNhYjBmOTgzNmE0ZGVhMGU3M2YyZjgzNDhkNzcwYWJkYTIzOTdhMWEwY2UzMDMzZGQ1MpoHJWU=: 00:23:54.762 10:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:54.762 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:54.762 10:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:54.762 10:40:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.762 10:40:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:54.762 10:40:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.762 10:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:54.762 10:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:54.762 10:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:54.762 10:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:55.022 10:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:23:55.022 10:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:55.022 10:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:55.022 10:40:00 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:55.022 10:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:55.022 10:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:55.022 10:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:55.022 10:40:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.022 10:40:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:55.022 10:40:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.022 10:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:55.022 10:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:55.282 00:23:55.282 10:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:55.282 10:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:55.282 10:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:55.541 10:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:55.541 10:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:55.541 10:40:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.541 10:40:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:55.541 10:40:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.541 10:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:55.541 { 00:23:55.541 "cntlid": 81, 00:23:55.541 "qid": 0, 00:23:55.541 "state": "enabled", 00:23:55.541 "thread": "nvmf_tgt_poll_group_000", 00:23:55.541 "listen_address": { 00:23:55.541 "trtype": "TCP", 00:23:55.541 "adrfam": "IPv4", 00:23:55.541 "traddr": "10.0.0.2", 00:23:55.541 "trsvcid": "4420" 00:23:55.541 }, 00:23:55.541 "peer_address": { 00:23:55.541 "trtype": "TCP", 00:23:55.541 "adrfam": "IPv4", 00:23:55.541 "traddr": "10.0.0.1", 00:23:55.541 "trsvcid": "49298" 00:23:55.541 }, 00:23:55.541 "auth": { 00:23:55.541 "state": "completed", 00:23:55.541 "digest": "sha384", 00:23:55.541 "dhgroup": "ffdhe6144" 00:23:55.541 } 00:23:55.541 } 00:23:55.541 ]' 00:23:55.541 10:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:55.541 10:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:55.541 10:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:55.541 10:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:55.541 10:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:55.541 10:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:55.541 10:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:55.541 10:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:55.801 10:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ODQ4MmNmYzc5N2Q3YzNiYjIyMTBjYWY1MWMxMmEyMjM4Y2VmZDliOWM0OTI4Yzky/ePZDw==: --dhchap-ctrl-secret DHHC-1:03:M2ZmMDExZjRmODRjOTY1NjE5YTkwOWQxNjQ5YThhN2Q1YmE5NzFhZGI1ZDFmN2JhNzc5MmQ4NjI0ZWM4MDA5Yia8uyw=: 00:23:56.740 10:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:56.740 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:56.740 10:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:56.740 10:40:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.740 10:40:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:56.740 10:40:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.740 10:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:56.740 10:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:56.740 10:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:56.740 10:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:23:56.740 10:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:56.740 10:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:56.740 10:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:56.740 10:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:56.740 10:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:56.740 10:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:56.740 10:40:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.740 10:40:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:56.740 10:40:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.740 10:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:56.740 10:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:57.002 00:23:57.002 10:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:57.002 10:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:57.002 10:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:57.262 10:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:57.262 10:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:57.262 10:40:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.262 10:40:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:57.262 10:40:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.262 10:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:57.262 { 00:23:57.262 "cntlid": 83, 00:23:57.262 "qid": 0, 00:23:57.262 "state": "enabled", 00:23:57.262 "thread": "nvmf_tgt_poll_group_000", 00:23:57.262 "listen_address": { 00:23:57.262 "trtype": "TCP", 00:23:57.262 "adrfam": "IPv4", 00:23:57.262 "traddr": "10.0.0.2", 00:23:57.262 "trsvcid": "4420" 00:23:57.262 }, 00:23:57.262 "peer_address": { 00:23:57.262 "trtype": "TCP", 00:23:57.262 "adrfam": "IPv4", 00:23:57.262 "traddr": "10.0.0.1", 00:23:57.262 "trsvcid": "49336" 00:23:57.262 }, 00:23:57.262 "auth": { 00:23:57.262 "state": "completed", 00:23:57.262 "digest": "sha384", 00:23:57.262 "dhgroup": "ffdhe6144" 00:23:57.262 } 00:23:57.262 } 00:23:57.262 ]' 00:23:57.262 10:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:57.262 10:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:57.262 10:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:57.262 10:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:57.262 10:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:57.262 10:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:57.262 10:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:57.262 10:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:57.525 10:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:N2VmZTE4MTNhMzBiZmU4Mjg5NzM2OGE1ODhjNTQ1YTcJFoN9: --dhchap-ctrl-secret 
DHHC-1:02:M2NmZDA1ZmZhZWY2MjE0ZTI1OGM0OTJjNjhmNjQyNmQ5MDQ3MzgyYjdmZmY3NmZke0TaFQ==: 00:23:58.466 10:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:58.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:58.466 10:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:58.466 10:40:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.466 10:40:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:58.466 10:40:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.466 10:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:58.466 10:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:58.466 10:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:58.466 10:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:23:58.466 10:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:58.466 10:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:58.466 10:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:58.466 10:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:58.466 10:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:58.466 10:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:58.466 10:40:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.466 10:40:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:58.466 10:40:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.467 10:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:58.467 10:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:58.727 00:23:58.727 10:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:58.727 10:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:58.727 10:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:58.988 10:40:04 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.988 10:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:58.988 10:40:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.988 10:40:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:58.988 10:40:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.988 10:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:58.988 { 00:23:58.988 "cntlid": 85, 00:23:58.988 "qid": 0, 00:23:58.988 "state": "enabled", 00:23:58.988 "thread": "nvmf_tgt_poll_group_000", 00:23:58.988 "listen_address": { 00:23:58.988 "trtype": "TCP", 00:23:58.988 "adrfam": "IPv4", 00:23:58.988 "traddr": "10.0.0.2", 00:23:58.988 "trsvcid": "4420" 00:23:58.988 }, 00:23:58.988 "peer_address": { 00:23:58.988 "trtype": "TCP", 00:23:58.988 "adrfam": "IPv4", 00:23:58.988 "traddr": "10.0.0.1", 00:23:58.988 "trsvcid": "47712" 00:23:58.988 }, 00:23:58.988 "auth": { 00:23:58.988 "state": "completed", 00:23:58.988 "digest": "sha384", 00:23:58.988 "dhgroup": "ffdhe6144" 00:23:58.988 } 00:23:58.988 } 00:23:58.988 ]' 00:23:58.988 10:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:58.988 10:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:58.988 10:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:58.988 10:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:58.988 10:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:58.988 10:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:58.988 10:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:58.988 10:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:59.249 10:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:MTY0MGRmZjE2ZTAzNWQ0NjFkZjU2YjlhN2IwYmMzNjgwYjAwMDhjYzMwM2JkOTYyXnKU4Q==: --dhchap-ctrl-secret DHHC-1:01:NGQ2OTQ1MzJiYTljYTczYWEzYzA5MWUxNDYxYzZhZjkPEE46: 00:24:00.192 10:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:00.192 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:00.192 10:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:00.192 10:40:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.192 10:40:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:00.192 10:40:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.192 10:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:00.192 10:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
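The entries above and below repeat the per-key cycle that target/auth.sh drives for every digest/dhgroup combination: constrain the host initiator, register the host NQN on the subsystem with a key pair, attach a controller with the same pair, check the result, then tear everything down. A minimal standalone sketch of that cycle is shown here for orientation; it assumes an SPDK nvmf target already serving nqn.2024-03.io.spdk:cnode0 on 10.0.0.2:4420, a host-side RPC server on /var/tmp/host.sock, and key objects (key0/ckey0) registered earlier in the test run — those names are placeholders, not the secrets used in this log.

#!/usr/bin/env bash
# Sketch of one connect_authenticate cycle (digest=sha384, dhgroup=ffdhe6144),
# built only from RPCs that appear in this log. Paths/key names are placeholders.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTSOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
KEY=key0    # named key assumed to be loaded on both sides beforehand (not shown here)
CKEY=ckey0  # matching controller key (same assumption)

# 1) Restrict the host-side initiator to one digest and one DH group.
$RPC -s $HOSTSOCK bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

# 2) Allow the host NQN on the subsystem with the chosen key pair (target-side RPC).
$RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key $KEY --dhchap-ctrlr-key $CKEY

# 3) Attach a controller from the host using the same key pair.
$RPC -s $HOSTSOCK bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q $HOSTNQN -n $SUBNQN --dhchap-key $KEY --dhchap-ctrlr-key $CKEY

# 4) Confirm the controller came up, then tear it down and drop the host entry.
$RPC -s $HOSTSOCK bdev_nvme_get_controllers | jq -r '.[].name'
$RPC -s $HOSTSOCK bdev_nvme_detach_controller nvme0
$RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN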
00:24:00.192 10:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:00.192 10:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:24:00.192 10:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:00.192 10:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:24:00.192 10:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:24:00.192 10:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:00.192 10:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:00.192 10:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:24:00.192 10:40:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.192 10:40:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:00.192 10:40:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.192 10:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:00.192 10:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:00.453 00:24:00.453 10:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:00.453 10:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:00.453 10:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:00.722 10:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.722 10:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:00.722 10:40:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.722 10:40:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:00.722 10:40:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.722 10:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:00.722 { 00:24:00.722 "cntlid": 87, 00:24:00.722 "qid": 0, 00:24:00.722 "state": "enabled", 00:24:00.722 "thread": "nvmf_tgt_poll_group_000", 00:24:00.722 "listen_address": { 00:24:00.722 "trtype": "TCP", 00:24:00.722 "adrfam": "IPv4", 00:24:00.722 "traddr": "10.0.0.2", 00:24:00.722 "trsvcid": "4420" 00:24:00.722 }, 00:24:00.722 "peer_address": { 00:24:00.722 "trtype": "TCP", 00:24:00.722 "adrfam": "IPv4", 00:24:00.722 "traddr": "10.0.0.1", 00:24:00.722 "trsvcid": "47750" 00:24:00.722 }, 00:24:00.722 "auth": { 00:24:00.722 "state": "completed", 
00:24:00.722 "digest": "sha384", 00:24:00.722 "dhgroup": "ffdhe6144" 00:24:00.722 } 00:24:00.722 } 00:24:00.722 ]' 00:24:00.722 10:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:00.722 10:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:00.722 10:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:00.722 10:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:00.722 10:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:00.722 10:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:00.722 10:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:00.722 10:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:00.983 10:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:MmFjZjBjNGZmMTdkNjNhYjBmOTgzNmE0ZGVhMGU3M2YyZjgzNDhkNzcwYWJkYTIzOTdhMWEwY2UzMDMzZGQ1MpoHJWU=: 00:24:01.614 10:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:01.614 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:01.614 10:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:01.614 10:40:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.614 10:40:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:01.614 10:40:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.614 10:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:24:01.614 10:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:01.614 10:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:01.614 10:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:01.920 10:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:24:01.920 10:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:01.920 10:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:24:01.920 10:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:01.920 10:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:01.920 10:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:01.920 10:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:24:01.920 10:40:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.920 10:40:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:01.920 10:40:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.920 10:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:01.920 10:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:02.504 00:24:02.504 10:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:02.504 10:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:02.504 10:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:02.504 10:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.504 10:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:02.504 10:40:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.504 10:40:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:02.504 10:40:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.504 10:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:02.504 { 00:24:02.504 "cntlid": 89, 00:24:02.504 "qid": 0, 00:24:02.504 "state": "enabled", 00:24:02.504 "thread": "nvmf_tgt_poll_group_000", 00:24:02.504 "listen_address": { 00:24:02.504 "trtype": "TCP", 00:24:02.504 "adrfam": "IPv4", 00:24:02.504 "traddr": "10.0.0.2", 00:24:02.504 "trsvcid": "4420" 00:24:02.504 }, 00:24:02.504 "peer_address": { 00:24:02.504 "trtype": "TCP", 00:24:02.504 "adrfam": "IPv4", 00:24:02.504 "traddr": "10.0.0.1", 00:24:02.504 "trsvcid": "47778" 00:24:02.504 }, 00:24:02.504 "auth": { 00:24:02.504 "state": "completed", 00:24:02.504 "digest": "sha384", 00:24:02.504 "dhgroup": "ffdhe8192" 00:24:02.504 } 00:24:02.504 } 00:24:02.504 ]' 00:24:02.504 10:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:02.504 10:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:02.504 10:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:02.764 10:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:02.764 10:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:02.764 10:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:02.764 10:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:02.764 10:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:02.764 10:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ODQ4MmNmYzc5N2Q3YzNiYjIyMTBjYWY1MWMxMmEyMjM4Y2VmZDliOWM0OTI4Yzky/ePZDw==: --dhchap-ctrl-secret DHHC-1:03:M2ZmMDExZjRmODRjOTY1NjE5YTkwOWQxNjQ5YThhN2Q1YmE5NzFhZGI1ZDFmN2JhNzc5MmQ4NjI0ZWM4MDA5Yia8uyw=: 00:24:03.703 10:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:03.703 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:03.703 10:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:03.703 10:40:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.703 10:40:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:03.703 10:40:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.703 10:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:03.703 10:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:03.703 10:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:03.703 10:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:24:03.703 10:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:03.703 10:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:24:03.703 10:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:03.703 10:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:03.703 10:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:03.703 10:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:03.703 10:40:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.703 10:40:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:03.703 10:40:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.703 10:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:03.703 10:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
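Each attach in this section is verified the same way: the target's qpair list for the subsystem is fetched and the negotiated auth fields are compared against the expected digest, DH group and completion state. A small sketch of that check, under the same assumptions as the sketch above (expected values here are the sha384/ffdhe8192 combination this part of the run exercises):

#!/usr/bin/env bash
# Sketch of the qpair verification step; field names match the JSON dumps in this log.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SUBNQN=nqn.2024-03.io.spdk:cnode0

qpairs=$($RPC nvmf_subsystem_get_qpairs $SUBNQN)

digest=$(echo "$qpairs"  | jq -r '.[0].auth.digest')
dhgroup=$(echo "$qpairs" | jq -r '.[0].auth.dhgroup')
state=$(echo "$qpairs"   | jq -r '.[0].auth.state')

# Fail loudly if the negotiated parameters differ from what was configured.
[[ $digest == sha384 && $dhgroup == ffdhe8192 && $state == completed ]] \
    && echo "DH-HMAC-CHAP negotiation OK" \
    || { echo "unexpected auth parameters: $digest/$dhgroup/$state" >&2; exit 1; }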
00:24:04.270 00:24:04.270 10:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:04.270 10:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:04.270 10:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:04.529 10:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:04.529 10:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:04.529 10:40:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.529 10:40:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:04.529 10:40:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.529 10:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:04.529 { 00:24:04.529 "cntlid": 91, 00:24:04.529 "qid": 0, 00:24:04.529 "state": "enabled", 00:24:04.529 "thread": "nvmf_tgt_poll_group_000", 00:24:04.529 "listen_address": { 00:24:04.529 "trtype": "TCP", 00:24:04.529 "adrfam": "IPv4", 00:24:04.529 "traddr": "10.0.0.2", 00:24:04.529 "trsvcid": "4420" 00:24:04.529 }, 00:24:04.529 "peer_address": { 00:24:04.529 "trtype": "TCP", 00:24:04.529 "adrfam": "IPv4", 00:24:04.529 "traddr": "10.0.0.1", 00:24:04.529 "trsvcid": "47796" 00:24:04.529 }, 00:24:04.529 "auth": { 00:24:04.529 "state": "completed", 00:24:04.529 "digest": "sha384", 00:24:04.529 "dhgroup": "ffdhe8192" 00:24:04.529 } 00:24:04.529 } 00:24:04.529 ]' 00:24:04.529 10:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:04.529 10:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:04.529 10:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:04.529 10:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:04.529 10:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:04.529 10:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:04.529 10:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:04.529 10:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:04.788 10:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:N2VmZTE4MTNhMzBiZmU4Mjg5NzM2OGE1ODhjNTQ1YTcJFoN9: --dhchap-ctrl-secret DHHC-1:02:M2NmZDA1ZmZhZWY2MjE0ZTI1OGM0OTJjNjhmNjQyNmQ5MDQ3MzgyYjdmZmY3NmZke0TaFQ==: 00:24:05.379 10:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:05.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:05.379 10:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:05.379 10:40:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:24:05.380 10:40:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.380 10:40:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.380 10:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:05.380 10:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:05.380 10:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:05.638 10:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:24:05.638 10:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:05.638 10:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:24:05.638 10:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:05.638 10:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:05.638 10:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:05.638 10:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:05.638 10:40:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.638 10:40:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.638 10:40:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.638 10:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:05.638 10:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:06.207 00:24:06.207 10:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:06.207 10:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:06.207 10:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:06.207 10:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:06.207 10:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:06.207 10:40:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.207 10:40:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:06.207 10:40:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.207 10:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:06.207 { 
00:24:06.207 "cntlid": 93, 00:24:06.207 "qid": 0, 00:24:06.207 "state": "enabled", 00:24:06.207 "thread": "nvmf_tgt_poll_group_000", 00:24:06.207 "listen_address": { 00:24:06.207 "trtype": "TCP", 00:24:06.207 "adrfam": "IPv4", 00:24:06.207 "traddr": "10.0.0.2", 00:24:06.207 "trsvcid": "4420" 00:24:06.207 }, 00:24:06.207 "peer_address": { 00:24:06.207 "trtype": "TCP", 00:24:06.207 "adrfam": "IPv4", 00:24:06.207 "traddr": "10.0.0.1", 00:24:06.207 "trsvcid": "47820" 00:24:06.207 }, 00:24:06.207 "auth": { 00:24:06.207 "state": "completed", 00:24:06.207 "digest": "sha384", 00:24:06.207 "dhgroup": "ffdhe8192" 00:24:06.207 } 00:24:06.207 } 00:24:06.207 ]' 00:24:06.207 10:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:06.467 10:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:06.467 10:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:06.467 10:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:06.467 10:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:06.467 10:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:06.467 10:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:06.467 10:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:06.727 10:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:MTY0MGRmZjE2ZTAzNWQ0NjFkZjU2YjlhN2IwYmMzNjgwYjAwMDhjYzMwM2JkOTYyXnKU4Q==: --dhchap-ctrl-secret DHHC-1:01:NGQ2OTQ1MzJiYTljYTczYWEzYzA5MWUxNDYxYzZhZjkPEE46: 00:24:07.296 10:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:07.296 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:07.296 10:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:07.296 10:40:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.296 10:40:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:07.296 10:40:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.296 10:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:07.296 10:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:07.296 10:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:07.556 10:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:24:07.556 10:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:07.556 10:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:24:07.556 10:40:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:07.556 10:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:07.556 10:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:07.556 10:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:24:07.556 10:40:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.556 10:40:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:07.556 10:40:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.556 10:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:07.556 10:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:08.127 00:24:08.127 10:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:08.127 10:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:08.127 10:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:08.127 10:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.127 10:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:08.127 10:40:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.127 10:40:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:08.127 10:40:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.127 10:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:08.127 { 00:24:08.127 "cntlid": 95, 00:24:08.127 "qid": 0, 00:24:08.127 "state": "enabled", 00:24:08.127 "thread": "nvmf_tgt_poll_group_000", 00:24:08.127 "listen_address": { 00:24:08.127 "trtype": "TCP", 00:24:08.127 "adrfam": "IPv4", 00:24:08.127 "traddr": "10.0.0.2", 00:24:08.127 "trsvcid": "4420" 00:24:08.127 }, 00:24:08.127 "peer_address": { 00:24:08.127 "trtype": "TCP", 00:24:08.127 "adrfam": "IPv4", 00:24:08.127 "traddr": "10.0.0.1", 00:24:08.127 "trsvcid": "36926" 00:24:08.127 }, 00:24:08.127 "auth": { 00:24:08.127 "state": "completed", 00:24:08.127 "digest": "sha384", 00:24:08.127 "dhgroup": "ffdhe8192" 00:24:08.127 } 00:24:08.127 } 00:24:08.127 ]' 00:24:08.127 10:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:08.127 10:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:08.127 10:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:08.388 10:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:08.388 10:40:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:08.388 10:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:08.388 10:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:08.388 10:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:08.388 10:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:MmFjZjBjNGZmMTdkNjNhYjBmOTgzNmE0ZGVhMGU3M2YyZjgzNDhkNzcwYWJkYTIzOTdhMWEwY2UzMDMzZGQ1MpoHJWU=: 00:24:09.330 10:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:09.330 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:09.330 10:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:09.330 10:40:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.330 10:40:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:09.330 10:40:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.330 10:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:24:09.330 10:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:24:09.330 10:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:09.330 10:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:24:09.331 10:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:24:09.331 10:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:24:09.331 10:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:09.331 10:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:09.331 10:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:24:09.331 10:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:09.331 10:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:09.331 10:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:09.331 10:40:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.331 10:40:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:09.331 10:40:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.331 10:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:09.331 10:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:09.590 00:24:09.590 10:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:09.590 10:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:09.590 10:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:09.851 10:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.851 10:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:09.851 10:40:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.851 10:40:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:09.851 10:40:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.851 10:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:09.851 { 00:24:09.851 "cntlid": 97, 00:24:09.851 "qid": 0, 00:24:09.851 "state": "enabled", 00:24:09.851 "thread": "nvmf_tgt_poll_group_000", 00:24:09.851 "listen_address": { 00:24:09.851 "trtype": "TCP", 00:24:09.851 "adrfam": "IPv4", 00:24:09.851 "traddr": "10.0.0.2", 00:24:09.851 "trsvcid": "4420" 00:24:09.851 }, 00:24:09.851 "peer_address": { 00:24:09.851 "trtype": "TCP", 00:24:09.851 "adrfam": "IPv4", 00:24:09.851 "traddr": "10.0.0.1", 00:24:09.851 "trsvcid": "36952" 00:24:09.851 }, 00:24:09.851 "auth": { 00:24:09.851 "state": "completed", 00:24:09.851 "digest": "sha512", 00:24:09.851 "dhgroup": "null" 00:24:09.851 } 00:24:09.851 } 00:24:09.851 ]' 00:24:09.851 10:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:09.851 10:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:09.851 10:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:09.851 10:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:24:09.851 10:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:09.851 10:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:09.851 10:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:09.851 10:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:10.110 10:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ODQ4MmNmYzc5N2Q3YzNiYjIyMTBjYWY1MWMxMmEyMjM4Y2VmZDliOWM0OTI4Yzky/ePZDw==: --dhchap-ctrl-secret 
DHHC-1:03:M2ZmMDExZjRmODRjOTY1NjE5YTkwOWQxNjQ5YThhN2Q1YmE5NzFhZGI1ZDFmN2JhNzc5MmQ4NjI0ZWM4MDA5Yia8uyw=: 00:24:10.680 10:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:10.680 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:10.680 10:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:10.680 10:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.680 10:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:10.680 10:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.680 10:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:10.680 10:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:24:10.680 10:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:24:10.939 10:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:24:10.939 10:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:10.939 10:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:10.939 10:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:24:10.939 10:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:10.939 10:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:10.939 10:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:10.939 10:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.939 10:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:10.939 10:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.939 10:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:10.939 10:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:11.199 00:24:11.199 10:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:11.199 10:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:11.199 10:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:11.199 10:40:16 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.199 10:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:11.199 10:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.199 10:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:11.199 10:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.199 10:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:11.199 { 00:24:11.199 "cntlid": 99, 00:24:11.199 "qid": 0, 00:24:11.199 "state": "enabled", 00:24:11.199 "thread": "nvmf_tgt_poll_group_000", 00:24:11.199 "listen_address": { 00:24:11.199 "trtype": "TCP", 00:24:11.199 "adrfam": "IPv4", 00:24:11.199 "traddr": "10.0.0.2", 00:24:11.199 "trsvcid": "4420" 00:24:11.199 }, 00:24:11.199 "peer_address": { 00:24:11.199 "trtype": "TCP", 00:24:11.199 "adrfam": "IPv4", 00:24:11.199 "traddr": "10.0.0.1", 00:24:11.199 "trsvcid": "36992" 00:24:11.199 }, 00:24:11.199 "auth": { 00:24:11.199 "state": "completed", 00:24:11.199 "digest": "sha512", 00:24:11.199 "dhgroup": "null" 00:24:11.199 } 00:24:11.199 } 00:24:11.199 ]' 00:24:11.199 10:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:11.459 10:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:11.459 10:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:11.459 10:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:24:11.459 10:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:11.459 10:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:11.459 10:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:11.459 10:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:11.718 10:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:N2VmZTE4MTNhMzBiZmU4Mjg5NzM2OGE1ODhjNTQ1YTcJFoN9: --dhchap-ctrl-secret DHHC-1:02:M2NmZDA1ZmZhZWY2MjE0ZTI1OGM0OTJjNjhmNjQyNmQ5MDQ3MzgyYjdmZmY3NmZke0TaFQ==: 00:24:12.289 10:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:12.289 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:12.289 10:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:12.289 10:40:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.289 10:40:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:12.289 10:40:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.289 10:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:12.289 10:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:24:12.289 10:40:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:24:12.549 10:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:24:12.549 10:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:12.549 10:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:12.549 10:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:24:12.549 10:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:12.549 10:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:12.549 10:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:12.549 10:40:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.549 10:40:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:12.549 10:40:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.549 10:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:12.549 10:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:12.809 00:24:12.809 10:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:12.809 10:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:12.809 10:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:12.809 10:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.809 10:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:12.809 10:40:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.809 10:40:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:12.809 10:40:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.809 10:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:12.809 { 00:24:12.809 "cntlid": 101, 00:24:12.809 "qid": 0, 00:24:12.809 "state": "enabled", 00:24:12.809 "thread": "nvmf_tgt_poll_group_000", 00:24:12.809 "listen_address": { 00:24:12.809 "trtype": "TCP", 00:24:12.809 "adrfam": "IPv4", 00:24:12.809 "traddr": "10.0.0.2", 00:24:12.809 "trsvcid": "4420" 00:24:12.809 }, 00:24:12.809 "peer_address": { 00:24:12.809 "trtype": "TCP", 00:24:12.809 "adrfam": "IPv4", 00:24:12.809 "traddr": "10.0.0.1", 00:24:12.809 "trsvcid": "37026" 00:24:12.809 }, 00:24:12.809 "auth": 
{ 00:24:12.809 "state": "completed", 00:24:12.809 "digest": "sha512", 00:24:12.809 "dhgroup": "null" 00:24:12.809 } 00:24:12.809 } 00:24:12.809 ]' 00:24:12.809 10:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:13.069 10:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:13.069 10:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:13.069 10:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:24:13.069 10:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:13.069 10:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:13.069 10:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:13.069 10:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:13.069 10:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:MTY0MGRmZjE2ZTAzNWQ0NjFkZjU2YjlhN2IwYmMzNjgwYjAwMDhjYzMwM2JkOTYyXnKU4Q==: --dhchap-ctrl-secret DHHC-1:01:NGQ2OTQ1MzJiYTljYTczYWEzYzA5MWUxNDYxYzZhZjkPEE46: 00:24:14.008 10:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:14.008 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:14.008 10:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:14.008 10:40:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.008 10:40:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:14.008 10:40:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.008 10:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:14.008 10:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:24:14.008 10:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:24:14.008 10:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:24:14.008 10:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:14.008 10:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:14.008 10:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:24:14.008 10:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:14.008 10:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:14.008 10:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:24:14.008 10:40:19 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.008 10:40:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:14.008 10:40:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.008 10:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:14.008 10:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:14.269 00:24:14.269 10:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:14.269 10:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:14.269 10:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:14.529 10:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.529 10:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:14.529 10:40:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.529 10:40:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:14.529 10:40:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.529 10:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:14.529 { 00:24:14.529 "cntlid": 103, 00:24:14.529 "qid": 0, 00:24:14.529 "state": "enabled", 00:24:14.529 "thread": "nvmf_tgt_poll_group_000", 00:24:14.529 "listen_address": { 00:24:14.529 "trtype": "TCP", 00:24:14.529 "adrfam": "IPv4", 00:24:14.529 "traddr": "10.0.0.2", 00:24:14.529 "trsvcid": "4420" 00:24:14.529 }, 00:24:14.529 "peer_address": { 00:24:14.529 "trtype": "TCP", 00:24:14.529 "adrfam": "IPv4", 00:24:14.529 "traddr": "10.0.0.1", 00:24:14.529 "trsvcid": "37058" 00:24:14.529 }, 00:24:14.529 "auth": { 00:24:14.529 "state": "completed", 00:24:14.529 "digest": "sha512", 00:24:14.529 "dhgroup": "null" 00:24:14.529 } 00:24:14.529 } 00:24:14.529 ]' 00:24:14.529 10:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:14.529 10:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:14.529 10:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:14.529 10:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:24:14.529 10:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:14.529 10:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:14.529 10:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:14.529 10:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:14.788 10:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:MmFjZjBjNGZmMTdkNjNhYjBmOTgzNmE0ZGVhMGU3M2YyZjgzNDhkNzcwYWJkYTIzOTdhMWEwY2UzMDMzZGQ1MpoHJWU=: 00:24:15.359 10:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:15.359 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:15.359 10:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:15.359 10:40:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.359 10:40:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:15.359 10:40:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.359 10:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:24:15.359 10:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:15.359 10:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:15.359 10:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:15.620 10:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:24:15.620 10:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:15.620 10:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:15.620 10:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:24:15.620 10:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:15.620 10:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:15.620 10:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:15.620 10:40:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.620 10:40:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:15.620 10:40:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.620 10:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:15.620 10:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:15.879 00:24:15.879 10:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:15.879 10:40:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:15.879 10:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:15.879 10:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.139 10:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:16.139 10:40:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.139 10:40:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:16.139 10:40:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.139 10:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:16.139 { 00:24:16.139 "cntlid": 105, 00:24:16.139 "qid": 0, 00:24:16.139 "state": "enabled", 00:24:16.139 "thread": "nvmf_tgt_poll_group_000", 00:24:16.139 "listen_address": { 00:24:16.139 "trtype": "TCP", 00:24:16.139 "adrfam": "IPv4", 00:24:16.139 "traddr": "10.0.0.2", 00:24:16.139 "trsvcid": "4420" 00:24:16.139 }, 00:24:16.139 "peer_address": { 00:24:16.139 "trtype": "TCP", 00:24:16.139 "adrfam": "IPv4", 00:24:16.139 "traddr": "10.0.0.1", 00:24:16.139 "trsvcid": "37082" 00:24:16.139 }, 00:24:16.139 "auth": { 00:24:16.139 "state": "completed", 00:24:16.139 "digest": "sha512", 00:24:16.139 "dhgroup": "ffdhe2048" 00:24:16.139 } 00:24:16.139 } 00:24:16.139 ]' 00:24:16.139 10:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:16.139 10:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:16.139 10:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:16.139 10:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:24:16.139 10:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:16.139 10:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:16.139 10:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:16.139 10:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:16.399 10:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ODQ4MmNmYzc5N2Q3YzNiYjIyMTBjYWY1MWMxMmEyMjM4Y2VmZDliOWM0OTI4Yzky/ePZDw==: --dhchap-ctrl-secret DHHC-1:03:M2ZmMDExZjRmODRjOTY1NjE5YTkwOWQxNjQ5YThhN2Q1YmE5NzFhZGI1ZDFmN2JhNzc5MmQ4NjI0ZWM4MDA5Yia8uyw=: 00:24:16.967 10:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:16.967 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:16.967 10:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:16.967 10:40:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.967 10:40:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
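The iterations traced above all follow the same per-key sequence: restrict the host's DH-HMAC-CHAP negotiation to a single digest and DH group, register the host NQN on the subsystem with the key pair under test, attach a controller over the host RPC socket, verify the negotiated auth parameters on the resulting qpair, then repeat the attach through the kernel initiator with nvme-cli before tearing everything down. A condensed sketch of that loop body follows; rpc.py paths are abbreviated, the DHHC-1 secrets are placeholders, and the named key objects (key1/ckey1) plus the target RPC socket are assumed to have been configured earlier in auth.sh, outside this excerpt.

  # host side (rpc.py -s /var/tmp/host.sock): allow only one digest/dhgroup combination
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

  # target side: let the host NQN authenticate against the key pair under test
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # host side: attach a controller that authenticates with the same keys
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # target side: confirm the qpair completed auth with the expected digest and dhgroup
  scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'

  # detach the SPDK-side controller, then exercise the kernel initiator with the raw secrets
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
      --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 \
      --dhchap-secret 'DHHC-1:01:<host secret>' --dhchap-ctrl-secret 'DHHC-1:02:<ctrl secret>'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0

  # clean up so the next digest/dhgroup/key combination starts from scratch
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

In the trace itself this is what the auth.sh helpers expand to: hostrpc wraps rpc.py with -s /var/tmp/host.sock for the bdev_nvme calls, while rpc_cmd issues the nvmf_subsystem_* calls against the target.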
00:24:16.967 10:40:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.967 10:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:16.967 10:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:16.967 10:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:17.228 10:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:24:17.228 10:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:17.228 10:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:17.228 10:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:24:17.228 10:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:17.228 10:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:17.228 10:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:17.228 10:40:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.228 10:40:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:17.228 10:40:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.228 10:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:17.228 10:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:17.488 00:24:17.488 10:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:17.488 10:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:17.488 10:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:17.488 10:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.488 10:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:17.488 10:40:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.488 10:40:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:17.488 10:40:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.488 10:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:17.488 { 00:24:17.488 "cntlid": 107, 00:24:17.488 "qid": 0, 00:24:17.488 "state": "enabled", 00:24:17.488 "thread": 
"nvmf_tgt_poll_group_000", 00:24:17.488 "listen_address": { 00:24:17.488 "trtype": "TCP", 00:24:17.488 "adrfam": "IPv4", 00:24:17.488 "traddr": "10.0.0.2", 00:24:17.488 "trsvcid": "4420" 00:24:17.488 }, 00:24:17.488 "peer_address": { 00:24:17.488 "trtype": "TCP", 00:24:17.488 "adrfam": "IPv4", 00:24:17.488 "traddr": "10.0.0.1", 00:24:17.488 "trsvcid": "39154" 00:24:17.488 }, 00:24:17.488 "auth": { 00:24:17.488 "state": "completed", 00:24:17.488 "digest": "sha512", 00:24:17.488 "dhgroup": "ffdhe2048" 00:24:17.488 } 00:24:17.488 } 00:24:17.488 ]' 00:24:17.488 10:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:17.747 10:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:17.747 10:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:17.747 10:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:24:17.747 10:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:17.747 10:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:17.747 10:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:17.747 10:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:18.007 10:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:N2VmZTE4MTNhMzBiZmU4Mjg5NzM2OGE1ODhjNTQ1YTcJFoN9: --dhchap-ctrl-secret DHHC-1:02:M2NmZDA1ZmZhZWY2MjE0ZTI1OGM0OTJjNjhmNjQyNmQ5MDQ3MzgyYjdmZmY3NmZke0TaFQ==: 00:24:18.576 10:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:18.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:18.576 10:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:18.576 10:40:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.576 10:40:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:18.576 10:40:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.576 10:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:18.576 10:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:18.576 10:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:18.836 10:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:24:18.836 10:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:18.836 10:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:18.836 10:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:24:18.836 10:40:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:18.836 10:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:18.836 10:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:18.836 10:40:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.836 10:40:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:18.836 10:40:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.836 10:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:18.836 10:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:19.096 00:24:19.096 10:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:19.096 10:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:19.096 10:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:19.096 10:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.096 10:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:19.096 10:40:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.096 10:40:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:19.096 10:40:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.096 10:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:19.096 { 00:24:19.096 "cntlid": 109, 00:24:19.096 "qid": 0, 00:24:19.096 "state": "enabled", 00:24:19.096 "thread": "nvmf_tgt_poll_group_000", 00:24:19.096 "listen_address": { 00:24:19.096 "trtype": "TCP", 00:24:19.096 "adrfam": "IPv4", 00:24:19.096 "traddr": "10.0.0.2", 00:24:19.096 "trsvcid": "4420" 00:24:19.096 }, 00:24:19.096 "peer_address": { 00:24:19.096 "trtype": "TCP", 00:24:19.096 "adrfam": "IPv4", 00:24:19.096 "traddr": "10.0.0.1", 00:24:19.096 "trsvcid": "39188" 00:24:19.096 }, 00:24:19.096 "auth": { 00:24:19.096 "state": "completed", 00:24:19.096 "digest": "sha512", 00:24:19.096 "dhgroup": "ffdhe2048" 00:24:19.096 } 00:24:19.096 } 00:24:19.096 ]' 00:24:19.096 10:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:19.096 10:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:19.356 10:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:19.356 10:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:24:19.356 10:40:24 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:19.356 10:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:19.356 10:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:19.356 10:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:19.356 10:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:MTY0MGRmZjE2ZTAzNWQ0NjFkZjU2YjlhN2IwYmMzNjgwYjAwMDhjYzMwM2JkOTYyXnKU4Q==: --dhchap-ctrl-secret DHHC-1:01:NGQ2OTQ1MzJiYTljYTczYWEzYzA5MWUxNDYxYzZhZjkPEE46: 00:24:20.296 10:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:20.296 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:20.296 10:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:20.296 10:40:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.296 10:40:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:20.296 10:40:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.296 10:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:20.296 10:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:20.296 10:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:20.296 10:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:24:20.296 10:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:20.296 10:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:20.296 10:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:24:20.296 10:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:20.296 10:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:20.296 10:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:24:20.296 10:40:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.296 10:40:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:20.296 10:40:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.296 10:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:20.296 10:40:25 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:20.556 00:24:20.556 10:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:20.556 10:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:20.556 10:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:20.816 10:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.816 10:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:20.816 10:40:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.816 10:40:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:20.816 10:40:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.816 10:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:20.816 { 00:24:20.816 "cntlid": 111, 00:24:20.816 "qid": 0, 00:24:20.816 "state": "enabled", 00:24:20.816 "thread": "nvmf_tgt_poll_group_000", 00:24:20.816 "listen_address": { 00:24:20.816 "trtype": "TCP", 00:24:20.816 "adrfam": "IPv4", 00:24:20.816 "traddr": "10.0.0.2", 00:24:20.816 "trsvcid": "4420" 00:24:20.816 }, 00:24:20.816 "peer_address": { 00:24:20.816 "trtype": "TCP", 00:24:20.816 "adrfam": "IPv4", 00:24:20.816 "traddr": "10.0.0.1", 00:24:20.816 "trsvcid": "39222" 00:24:20.816 }, 00:24:20.816 "auth": { 00:24:20.816 "state": "completed", 00:24:20.816 "digest": "sha512", 00:24:20.816 "dhgroup": "ffdhe2048" 00:24:20.816 } 00:24:20.816 } 00:24:20.816 ]' 00:24:20.816 10:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:20.816 10:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:20.816 10:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:20.816 10:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:24:20.816 10:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:20.816 10:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:20.816 10:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:20.816 10:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:21.076 10:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:MmFjZjBjNGZmMTdkNjNhYjBmOTgzNmE0ZGVhMGU3M2YyZjgzNDhkNzcwYWJkYTIzOTdhMWEwY2UzMDMzZGQ1MpoHJWU=: 00:24:21.644 10:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:21.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:21.904 10:40:27 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:21.904 10:40:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.904 10:40:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:21.904 10:40:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.904 10:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:24:21.904 10:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:21.904 10:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:21.904 10:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:21.904 10:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:24:21.904 10:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:21.904 10:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:21.904 10:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:24:21.904 10:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:21.904 10:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:21.904 10:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:21.904 10:40:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.904 10:40:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:21.904 10:40:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.904 10:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:21.904 10:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:22.164 00:24:22.164 10:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:22.164 10:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:22.164 10:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:22.424 10:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.424 10:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:22.424 10:40:27 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.424 10:40:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:22.424 10:40:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.424 10:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:22.424 { 00:24:22.424 "cntlid": 113, 00:24:22.424 "qid": 0, 00:24:22.424 "state": "enabled", 00:24:22.424 "thread": "nvmf_tgt_poll_group_000", 00:24:22.424 "listen_address": { 00:24:22.424 "trtype": "TCP", 00:24:22.424 "adrfam": "IPv4", 00:24:22.424 "traddr": "10.0.0.2", 00:24:22.424 "trsvcid": "4420" 00:24:22.424 }, 00:24:22.424 "peer_address": { 00:24:22.424 "trtype": "TCP", 00:24:22.424 "adrfam": "IPv4", 00:24:22.424 "traddr": "10.0.0.1", 00:24:22.424 "trsvcid": "39246" 00:24:22.424 }, 00:24:22.424 "auth": { 00:24:22.424 "state": "completed", 00:24:22.424 "digest": "sha512", 00:24:22.424 "dhgroup": "ffdhe3072" 00:24:22.424 } 00:24:22.424 } 00:24:22.424 ]' 00:24:22.424 10:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:22.424 10:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:22.424 10:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:22.424 10:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:22.424 10:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:22.424 10:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:22.424 10:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:22.424 10:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:22.685 10:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ODQ4MmNmYzc5N2Q3YzNiYjIyMTBjYWY1MWMxMmEyMjM4Y2VmZDliOWM0OTI4Yzky/ePZDw==: --dhchap-ctrl-secret DHHC-1:03:M2ZmMDExZjRmODRjOTY1NjE5YTkwOWQxNjQ5YThhN2Q1YmE5NzFhZGI1ZDFmN2JhNzc5MmQ4NjI0ZWM4MDA5Yia8uyw=: 00:24:23.255 10:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:23.255 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:23.515 10:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:23.515 10:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.515 10:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:23.515 10:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.515 10:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:23.515 10:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:23.515 10:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:23.515 10:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:24:23.515 10:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:23.515 10:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:23.515 10:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:24:23.515 10:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:23.515 10:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:23.515 10:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:23.515 10:40:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.515 10:40:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:23.515 10:40:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.515 10:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:23.515 10:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:23.776 00:24:23.776 10:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:23.776 10:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:23.776 10:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:24.037 10:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.037 10:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:24.037 10:40:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.037 10:40:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:24.037 10:40:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.037 10:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:24.037 { 00:24:24.037 "cntlid": 115, 00:24:24.037 "qid": 0, 00:24:24.037 "state": "enabled", 00:24:24.037 "thread": "nvmf_tgt_poll_group_000", 00:24:24.037 "listen_address": { 00:24:24.037 "trtype": "TCP", 00:24:24.037 "adrfam": "IPv4", 00:24:24.037 "traddr": "10.0.0.2", 00:24:24.037 "trsvcid": "4420" 00:24:24.037 }, 00:24:24.037 "peer_address": { 00:24:24.037 "trtype": "TCP", 00:24:24.037 "adrfam": "IPv4", 00:24:24.037 "traddr": "10.0.0.1", 00:24:24.037 "trsvcid": "39256" 00:24:24.037 }, 00:24:24.037 "auth": { 00:24:24.037 "state": "completed", 00:24:24.037 "digest": "sha512", 00:24:24.037 "dhgroup": "ffdhe3072" 00:24:24.037 } 00:24:24.037 } 
00:24:24.037 ]' 00:24:24.037 10:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:24.037 10:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:24.037 10:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:24.037 10:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:24.037 10:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:24.037 10:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:24.037 10:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:24.037 10:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:24.298 10:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:N2VmZTE4MTNhMzBiZmU4Mjg5NzM2OGE1ODhjNTQ1YTcJFoN9: --dhchap-ctrl-secret DHHC-1:02:M2NmZDA1ZmZhZWY2MjE0ZTI1OGM0OTJjNjhmNjQyNmQ5MDQ3MzgyYjdmZmY3NmZke0TaFQ==: 00:24:24.870 10:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:24.870 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:24.870 10:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:24.870 10:40:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.870 10:40:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:25.129 10:40:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.129 10:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:25.129 10:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:25.129 10:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:25.129 10:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:24:25.129 10:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:25.129 10:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:25.129 10:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:24:25.129 10:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:25.129 10:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:25.130 10:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:25.130 10:40:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.130 10:40:30 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:25.130 10:40:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.130 10:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:25.130 10:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:25.390 00:24:25.390 10:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:25.390 10:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:25.390 10:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:25.651 10:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.651 10:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:25.651 10:40:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.651 10:40:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:25.651 10:40:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.651 10:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:25.651 { 00:24:25.651 "cntlid": 117, 00:24:25.651 "qid": 0, 00:24:25.651 "state": "enabled", 00:24:25.651 "thread": "nvmf_tgt_poll_group_000", 00:24:25.651 "listen_address": { 00:24:25.651 "trtype": "TCP", 00:24:25.651 "adrfam": "IPv4", 00:24:25.651 "traddr": "10.0.0.2", 00:24:25.651 "trsvcid": "4420" 00:24:25.651 }, 00:24:25.651 "peer_address": { 00:24:25.651 "trtype": "TCP", 00:24:25.651 "adrfam": "IPv4", 00:24:25.651 "traddr": "10.0.0.1", 00:24:25.651 "trsvcid": "39274" 00:24:25.651 }, 00:24:25.651 "auth": { 00:24:25.651 "state": "completed", 00:24:25.651 "digest": "sha512", 00:24:25.651 "dhgroup": "ffdhe3072" 00:24:25.651 } 00:24:25.651 } 00:24:25.651 ]' 00:24:25.651 10:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:25.651 10:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:25.651 10:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:25.651 10:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:25.651 10:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:25.651 10:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:25.651 10:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:25.651 10:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:25.912 10:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:MTY0MGRmZjE2ZTAzNWQ0NjFkZjU2YjlhN2IwYmMzNjgwYjAwMDhjYzMwM2JkOTYyXnKU4Q==: --dhchap-ctrl-secret DHHC-1:01:NGQ2OTQ1MzJiYTljYTczYWEzYzA5MWUxNDYxYzZhZjkPEE46: 00:24:26.482 10:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:26.743 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:26.743 10:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:26.743 10:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.743 10:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:26.743 10:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.743 10:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:26.743 10:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:26.743 10:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:26.743 10:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:24:26.743 10:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:26.743 10:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:26.743 10:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:24:26.743 10:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:26.743 10:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:26.743 10:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:24:26.743 10:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.743 10:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:26.743 10:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.743 10:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:26.743 10:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:27.003 00:24:27.003 10:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:27.003 10:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:27.003 10:40:32 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:27.262 10:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.262 10:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:27.262 10:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.262 10:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:27.262 10:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.262 10:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:27.262 { 00:24:27.262 "cntlid": 119, 00:24:27.262 "qid": 0, 00:24:27.262 "state": "enabled", 00:24:27.262 "thread": "nvmf_tgt_poll_group_000", 00:24:27.262 "listen_address": { 00:24:27.262 "trtype": "TCP", 00:24:27.262 "adrfam": "IPv4", 00:24:27.262 "traddr": "10.0.0.2", 00:24:27.262 "trsvcid": "4420" 00:24:27.262 }, 00:24:27.262 "peer_address": { 00:24:27.262 "trtype": "TCP", 00:24:27.262 "adrfam": "IPv4", 00:24:27.262 "traddr": "10.0.0.1", 00:24:27.262 "trsvcid": "39296" 00:24:27.262 }, 00:24:27.262 "auth": { 00:24:27.262 "state": "completed", 00:24:27.262 "digest": "sha512", 00:24:27.262 "dhgroup": "ffdhe3072" 00:24:27.262 } 00:24:27.262 } 00:24:27.262 ]' 00:24:27.262 10:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:27.262 10:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:27.262 10:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:27.262 10:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:27.262 10:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:27.262 10:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:27.262 10:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:27.262 10:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:27.522 10:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:MmFjZjBjNGZmMTdkNjNhYjBmOTgzNmE0ZGVhMGU3M2YyZjgzNDhkNzcwYWJkYTIzOTdhMWEwY2UzMDMzZGQ1MpoHJWU=: 00:24:28.097 10:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:28.097 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:28.097 10:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:28.097 10:40:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.097 10:40:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:28.097 10:40:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.097 10:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:24:28.097 10:40:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:28.097 10:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:28.097 10:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:28.357 10:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:24:28.357 10:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:28.357 10:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:28.357 10:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:24:28.357 10:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:28.357 10:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:28.357 10:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:28.357 10:40:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.357 10:40:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:28.357 10:40:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.357 10:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:28.357 10:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:28.617 00:24:28.617 10:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:28.617 10:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:28.617 10:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:28.937 10:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.937 10:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:28.937 10:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.937 10:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:28.937 10:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.937 10:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:28.937 { 00:24:28.937 "cntlid": 121, 00:24:28.937 "qid": 0, 00:24:28.937 "state": "enabled", 00:24:28.937 "thread": "nvmf_tgt_poll_group_000", 00:24:28.937 "listen_address": { 00:24:28.937 "trtype": "TCP", 00:24:28.937 "adrfam": "IPv4", 
00:24:28.937 "traddr": "10.0.0.2", 00:24:28.937 "trsvcid": "4420" 00:24:28.937 }, 00:24:28.937 "peer_address": { 00:24:28.937 "trtype": "TCP", 00:24:28.937 "adrfam": "IPv4", 00:24:28.937 "traddr": "10.0.0.1", 00:24:28.937 "trsvcid": "55566" 00:24:28.937 }, 00:24:28.937 "auth": { 00:24:28.937 "state": "completed", 00:24:28.937 "digest": "sha512", 00:24:28.937 "dhgroup": "ffdhe4096" 00:24:28.937 } 00:24:28.937 } 00:24:28.937 ]' 00:24:28.937 10:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:28.937 10:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:28.937 10:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:28.937 10:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:28.937 10:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:28.937 10:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:28.937 10:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:28.937 10:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:29.204 10:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ODQ4MmNmYzc5N2Q3YzNiYjIyMTBjYWY1MWMxMmEyMjM4Y2VmZDliOWM0OTI4Yzky/ePZDw==: --dhchap-ctrl-secret DHHC-1:03:M2ZmMDExZjRmODRjOTY1NjE5YTkwOWQxNjQ5YThhN2Q1YmE5NzFhZGI1ZDFmN2JhNzc5MmQ4NjI0ZWM4MDA5Yia8uyw=: 00:24:29.776 10:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:29.776 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:29.776 10:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:29.776 10:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.776 10:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:29.776 10:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.776 10:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:29.776 10:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:29.776 10:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:30.037 10:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:24:30.037 10:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:30.037 10:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:30.037 10:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:24:30.037 10:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:30.037 10:40:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:30.037 10:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:30.037 10:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.037 10:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:30.037 10:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.037 10:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:30.037 10:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:30.297 00:24:30.297 10:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:30.297 10:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:30.297 10:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:30.297 10:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.297 10:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:30.297 10:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.297 10:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:30.558 10:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.558 10:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:30.558 { 00:24:30.558 "cntlid": 123, 00:24:30.558 "qid": 0, 00:24:30.558 "state": "enabled", 00:24:30.558 "thread": "nvmf_tgt_poll_group_000", 00:24:30.558 "listen_address": { 00:24:30.558 "trtype": "TCP", 00:24:30.558 "adrfam": "IPv4", 00:24:30.558 "traddr": "10.0.0.2", 00:24:30.558 "trsvcid": "4420" 00:24:30.558 }, 00:24:30.558 "peer_address": { 00:24:30.558 "trtype": "TCP", 00:24:30.558 "adrfam": "IPv4", 00:24:30.558 "traddr": "10.0.0.1", 00:24:30.558 "trsvcid": "55586" 00:24:30.558 }, 00:24:30.558 "auth": { 00:24:30.558 "state": "completed", 00:24:30.558 "digest": "sha512", 00:24:30.558 "dhgroup": "ffdhe4096" 00:24:30.558 } 00:24:30.558 } 00:24:30.558 ]' 00:24:30.558 10:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:30.558 10:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:30.558 10:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:30.558 10:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:30.558 10:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:30.558 10:40:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:30.558 10:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:30.558 10:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:30.818 10:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:N2VmZTE4MTNhMzBiZmU4Mjg5NzM2OGE1ODhjNTQ1YTcJFoN9: --dhchap-ctrl-secret DHHC-1:02:M2NmZDA1ZmZhZWY2MjE0ZTI1OGM0OTJjNjhmNjQyNmQ5MDQ3MzgyYjdmZmY3NmZke0TaFQ==: 00:24:31.387 10:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:31.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:31.387 10:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:31.387 10:40:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.387 10:40:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:31.387 10:40:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.387 10:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:31.387 10:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:31.387 10:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:31.647 10:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:24:31.647 10:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:31.647 10:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:31.647 10:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:24:31.647 10:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:31.647 10:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:31.647 10:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:31.647 10:40:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.647 10:40:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:31.647 10:40:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.647 10:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:31.647 10:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:31.907 00:24:31.907 10:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:31.907 10:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:31.907 10:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:32.167 10:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.167 10:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:32.167 10:40:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.167 10:40:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:32.167 10:40:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.167 10:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:32.167 { 00:24:32.167 "cntlid": 125, 00:24:32.167 "qid": 0, 00:24:32.167 "state": "enabled", 00:24:32.167 "thread": "nvmf_tgt_poll_group_000", 00:24:32.167 "listen_address": { 00:24:32.167 "trtype": "TCP", 00:24:32.167 "adrfam": "IPv4", 00:24:32.167 "traddr": "10.0.0.2", 00:24:32.167 "trsvcid": "4420" 00:24:32.167 }, 00:24:32.167 "peer_address": { 00:24:32.167 "trtype": "TCP", 00:24:32.167 "adrfam": "IPv4", 00:24:32.167 "traddr": "10.0.0.1", 00:24:32.167 "trsvcid": "55614" 00:24:32.167 }, 00:24:32.167 "auth": { 00:24:32.167 "state": "completed", 00:24:32.167 "digest": "sha512", 00:24:32.167 "dhgroup": "ffdhe4096" 00:24:32.167 } 00:24:32.167 } 00:24:32.167 ]' 00:24:32.167 10:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:32.167 10:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:32.167 10:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:32.167 10:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:32.167 10:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:32.167 10:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:32.167 10:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:32.167 10:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:32.427 10:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:MTY0MGRmZjE2ZTAzNWQ0NjFkZjU2YjlhN2IwYmMzNjgwYjAwMDhjYzMwM2JkOTYyXnKU4Q==: --dhchap-ctrl-secret DHHC-1:01:NGQ2OTQ1MzJiYTljYTczYWEzYzA5MWUxNDYxYzZhZjkPEE46: 00:24:32.996 10:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:32.997 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
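The dhgroup/key loop traced above repeats the same RPC sequence for every digest, DH group, and key combination. One iteration, condensed from the commands already visible in this trace, reduces to roughly the sketch below; it is illustrative only, not part of the captured output. The key names (key0/ckey0), the /var/tmp/host.sock host-side RPC socket, the 10.0.0.2:4420 listener, and the host/subsystem NQNs are the ones the test registered earlier in the run, and the long DHHC-1 secret strings are elided here (the full values appear in the trace). Target-side rpc_cmd calls use the default SPDK RPC socket, while host-side calls go through -s /var/tmp/host.sock, exactly as in the log.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
subnqn=nqn.2024-03.io.spdk:cnode0

# 1) Configure the SPDK initiator for DH-HMAC-CHAP: allowed digest and DH group.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

# 2) Allow the host NQN on the subsystem, bound to a pre-registered key pair.
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 3) Attach a controller over TCP with the same keys; this drives the AUTH negotiation.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 4) Verify the controller came up and that the qpair negotiated the expected parameters.
$rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.digest'       # expect: sha512
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.dhgroup'      # expect: ffdhe3072
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'        # expect: completed

# 5) Detach the SPDK path, then repeat the authentication with the kernel initiator.
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 \
    --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'  # secrets elided; see trace
nvme disconnect -n "$subnqn"

# 6) Drop the host entry before the next digest/dhgroup/key combination.
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
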
00:24:32.997 10:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:32.997 10:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.997 10:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:32.997 10:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.997 10:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:32.997 10:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:32.997 10:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:33.257 10:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:24:33.257 10:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:33.257 10:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:33.257 10:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:24:33.257 10:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:33.257 10:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:33.257 10:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:24:33.257 10:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.258 10:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:33.258 10:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.258 10:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:33.258 10:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:33.518 00:24:33.518 10:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:33.518 10:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:33.518 10:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:33.777 10:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.777 10:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:33.778 10:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.778 10:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:24:33.778 10:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.778 10:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:33.778 { 00:24:33.778 "cntlid": 127, 00:24:33.778 "qid": 0, 00:24:33.778 "state": "enabled", 00:24:33.778 "thread": "nvmf_tgt_poll_group_000", 00:24:33.778 "listen_address": { 00:24:33.778 "trtype": "TCP", 00:24:33.778 "adrfam": "IPv4", 00:24:33.778 "traddr": "10.0.0.2", 00:24:33.778 "trsvcid": "4420" 00:24:33.778 }, 00:24:33.778 "peer_address": { 00:24:33.778 "trtype": "TCP", 00:24:33.778 "adrfam": "IPv4", 00:24:33.778 "traddr": "10.0.0.1", 00:24:33.778 "trsvcid": "55648" 00:24:33.778 }, 00:24:33.778 "auth": { 00:24:33.778 "state": "completed", 00:24:33.778 "digest": "sha512", 00:24:33.778 "dhgroup": "ffdhe4096" 00:24:33.778 } 00:24:33.778 } 00:24:33.778 ]' 00:24:33.778 10:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:33.778 10:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:33.778 10:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:33.778 10:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:33.778 10:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:33.778 10:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:33.778 10:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:33.778 10:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:34.038 10:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:MmFjZjBjNGZmMTdkNjNhYjBmOTgzNmE0ZGVhMGU3M2YyZjgzNDhkNzcwYWJkYTIzOTdhMWEwY2UzMDMzZGQ1MpoHJWU=: 00:24:34.607 10:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:34.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:34.608 10:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:34.608 10:40:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.608 10:40:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:34.608 10:40:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.608 10:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:24:34.608 10:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:34.608 10:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:34.608 10:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:34.867 10:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:24:34.867 10:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:34.867 10:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:34.867 10:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:24:34.867 10:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:34.867 10:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:34.867 10:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:34.867 10:40:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.867 10:40:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:34.867 10:40:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.867 10:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:34.867 10:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:35.126 00:24:35.126 10:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:35.126 10:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:35.126 10:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:35.385 10:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.385 10:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:35.385 10:40:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.385 10:40:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:35.385 10:40:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.385 10:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:35.385 { 00:24:35.385 "cntlid": 129, 00:24:35.385 "qid": 0, 00:24:35.385 "state": "enabled", 00:24:35.385 "thread": "nvmf_tgt_poll_group_000", 00:24:35.385 "listen_address": { 00:24:35.385 "trtype": "TCP", 00:24:35.385 "adrfam": "IPv4", 00:24:35.385 "traddr": "10.0.0.2", 00:24:35.385 "trsvcid": "4420" 00:24:35.385 }, 00:24:35.385 "peer_address": { 00:24:35.385 "trtype": "TCP", 00:24:35.385 "adrfam": "IPv4", 00:24:35.385 "traddr": "10.0.0.1", 00:24:35.385 "trsvcid": "55666" 00:24:35.385 }, 00:24:35.385 "auth": { 00:24:35.385 "state": "completed", 00:24:35.385 "digest": "sha512", 00:24:35.385 "dhgroup": "ffdhe6144" 00:24:35.385 } 00:24:35.385 } 00:24:35.385 ]' 00:24:35.385 10:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:35.385 10:40:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:35.385 10:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:35.385 10:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:35.385 10:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:35.645 10:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:35.645 10:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:35.645 10:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:35.645 10:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ODQ4MmNmYzc5N2Q3YzNiYjIyMTBjYWY1MWMxMmEyMjM4Y2VmZDliOWM0OTI4Yzky/ePZDw==: --dhchap-ctrl-secret DHHC-1:03:M2ZmMDExZjRmODRjOTY1NjE5YTkwOWQxNjQ5YThhN2Q1YmE5NzFhZGI1ZDFmN2JhNzc5MmQ4NjI0ZWM4MDA5Yia8uyw=: 00:24:36.586 10:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:36.586 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:36.586 10:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:36.586 10:40:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.586 10:40:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:36.586 10:40:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.586 10:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:36.586 10:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:36.586 10:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:36.586 10:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:24:36.586 10:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:36.586 10:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:36.586 10:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:24:36.586 10:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:36.586 10:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:36.586 10:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:36.586 10:40:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.586 10:40:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:36.586 10:40:42 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.586 10:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:36.586 10:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:36.846 00:24:36.846 10:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:36.846 10:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:36.846 10:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:37.106 10:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.106 10:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:37.106 10:40:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.106 10:40:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:37.106 10:40:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.106 10:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:37.106 { 00:24:37.106 "cntlid": 131, 00:24:37.106 "qid": 0, 00:24:37.106 "state": "enabled", 00:24:37.106 "thread": "nvmf_tgt_poll_group_000", 00:24:37.106 "listen_address": { 00:24:37.106 "trtype": "TCP", 00:24:37.106 "adrfam": "IPv4", 00:24:37.106 "traddr": "10.0.0.2", 00:24:37.106 "trsvcid": "4420" 00:24:37.106 }, 00:24:37.106 "peer_address": { 00:24:37.106 "trtype": "TCP", 00:24:37.106 "adrfam": "IPv4", 00:24:37.106 "traddr": "10.0.0.1", 00:24:37.106 "trsvcid": "55694" 00:24:37.106 }, 00:24:37.106 "auth": { 00:24:37.106 "state": "completed", 00:24:37.106 "digest": "sha512", 00:24:37.106 "dhgroup": "ffdhe6144" 00:24:37.106 } 00:24:37.106 } 00:24:37.106 ]' 00:24:37.106 10:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:37.106 10:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:37.106 10:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:37.106 10:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:37.106 10:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:37.106 10:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:37.106 10:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:37.106 10:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:37.366 10:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:N2VmZTE4MTNhMzBiZmU4Mjg5NzM2OGE1ODhjNTQ1YTcJFoN9: --dhchap-ctrl-secret DHHC-1:02:M2NmZDA1ZmZhZWY2MjE0ZTI1OGM0OTJjNjhmNjQyNmQ5MDQ3MzgyYjdmZmY3NmZke0TaFQ==: 00:24:38.305 10:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:38.305 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:38.305 10:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:38.305 10:40:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.305 10:40:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:38.305 10:40:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.305 10:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:38.305 10:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:38.305 10:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:38.305 10:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:24:38.305 10:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:38.305 10:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:38.305 10:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:24:38.305 10:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:38.305 10:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:38.305 10:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:38.305 10:40:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.305 10:40:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:38.305 10:40:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.305 10:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:38.305 10:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:38.565 00:24:38.565 10:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:38.565 10:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:38.565 10:40:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:38.825 10:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.825 10:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:38.825 10:40:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.825 10:40:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:38.825 10:40:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.825 10:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:38.825 { 00:24:38.825 "cntlid": 133, 00:24:38.825 "qid": 0, 00:24:38.825 "state": "enabled", 00:24:38.825 "thread": "nvmf_tgt_poll_group_000", 00:24:38.825 "listen_address": { 00:24:38.825 "trtype": "TCP", 00:24:38.825 "adrfam": "IPv4", 00:24:38.825 "traddr": "10.0.0.2", 00:24:38.825 "trsvcid": "4420" 00:24:38.825 }, 00:24:38.825 "peer_address": { 00:24:38.825 "trtype": "TCP", 00:24:38.825 "adrfam": "IPv4", 00:24:38.825 "traddr": "10.0.0.1", 00:24:38.825 "trsvcid": "41766" 00:24:38.825 }, 00:24:38.825 "auth": { 00:24:38.825 "state": "completed", 00:24:38.825 "digest": "sha512", 00:24:38.825 "dhgroup": "ffdhe6144" 00:24:38.825 } 00:24:38.825 } 00:24:38.825 ]' 00:24:38.825 10:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:38.825 10:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:38.825 10:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:38.825 10:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:38.825 10:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:38.825 10:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:38.825 10:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:38.825 10:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:39.085 10:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:MTY0MGRmZjE2ZTAzNWQ0NjFkZjU2YjlhN2IwYmMzNjgwYjAwMDhjYzMwM2JkOTYyXnKU4Q==: --dhchap-ctrl-secret DHHC-1:01:NGQ2OTQ1MzJiYTljYTczYWEzYzA5MWUxNDYxYzZhZjkPEE46: 00:24:39.654 10:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:39.654 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:39.654 10:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:39.654 10:40:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.654 10:40:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:39.654 10:40:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.654 10:40:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:39.654 10:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:39.654 10:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:39.915 10:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:24:39.915 10:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:39.915 10:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:39.915 10:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:24:39.915 10:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:39.915 10:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:39.915 10:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:24:39.915 10:40:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.915 10:40:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:39.915 10:40:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.915 10:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:39.915 10:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:40.175 00:24:40.175 10:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:40.175 10:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:40.175 10:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:40.436 10:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.436 10:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:40.436 10:40:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.436 10:40:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:40.436 10:40:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.436 10:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:40.436 { 00:24:40.436 "cntlid": 135, 00:24:40.436 "qid": 0, 00:24:40.436 "state": "enabled", 00:24:40.436 "thread": "nvmf_tgt_poll_group_000", 00:24:40.436 "listen_address": { 00:24:40.436 "trtype": "TCP", 00:24:40.436 "adrfam": "IPv4", 00:24:40.436 "traddr": "10.0.0.2", 00:24:40.436 "trsvcid": "4420" 00:24:40.436 }, 
00:24:40.436 "peer_address": { 00:24:40.436 "trtype": "TCP", 00:24:40.436 "adrfam": "IPv4", 00:24:40.436 "traddr": "10.0.0.1", 00:24:40.436 "trsvcid": "41800" 00:24:40.436 }, 00:24:40.436 "auth": { 00:24:40.436 "state": "completed", 00:24:40.436 "digest": "sha512", 00:24:40.436 "dhgroup": "ffdhe6144" 00:24:40.436 } 00:24:40.436 } 00:24:40.436 ]' 00:24:40.436 10:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:40.436 10:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:40.436 10:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:40.436 10:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:40.436 10:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:40.696 10:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:40.696 10:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:40.696 10:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:40.696 10:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:MmFjZjBjNGZmMTdkNjNhYjBmOTgzNmE0ZGVhMGU3M2YyZjgzNDhkNzcwYWJkYTIzOTdhMWEwY2UzMDMzZGQ1MpoHJWU=: 00:24:41.635 10:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:41.635 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:41.635 10:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:41.635 10:40:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.635 10:40:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:41.635 10:40:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.635 10:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:24:41.635 10:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:41.635 10:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:41.635 10:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:41.635 10:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:24:41.635 10:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:41.635 10:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:41.635 10:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:41.635 10:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:41.635 10:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:24:41.635 10:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:41.635 10:40:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.635 10:40:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:41.635 10:40:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.635 10:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:41.635 10:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:42.206 00:24:42.206 10:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:42.206 10:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:42.206 10:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:42.465 10:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.465 10:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:42.465 10:40:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.465 10:40:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:42.465 10:40:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.465 10:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:42.465 { 00:24:42.465 "cntlid": 137, 00:24:42.465 "qid": 0, 00:24:42.465 "state": "enabled", 00:24:42.465 "thread": "nvmf_tgt_poll_group_000", 00:24:42.465 "listen_address": { 00:24:42.465 "trtype": "TCP", 00:24:42.465 "adrfam": "IPv4", 00:24:42.465 "traddr": "10.0.0.2", 00:24:42.465 "trsvcid": "4420" 00:24:42.465 }, 00:24:42.465 "peer_address": { 00:24:42.465 "trtype": "TCP", 00:24:42.465 "adrfam": "IPv4", 00:24:42.465 "traddr": "10.0.0.1", 00:24:42.465 "trsvcid": "41830" 00:24:42.465 }, 00:24:42.465 "auth": { 00:24:42.465 "state": "completed", 00:24:42.465 "digest": "sha512", 00:24:42.465 "dhgroup": "ffdhe8192" 00:24:42.465 } 00:24:42.465 } 00:24:42.465 ]' 00:24:42.465 10:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:42.465 10:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:42.465 10:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:42.465 10:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:42.465 10:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:42.465 10:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:42.465 10:40:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:42.465 10:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:42.725 10:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ODQ4MmNmYzc5N2Q3YzNiYjIyMTBjYWY1MWMxMmEyMjM4Y2VmZDliOWM0OTI4Yzky/ePZDw==: --dhchap-ctrl-secret DHHC-1:03:M2ZmMDExZjRmODRjOTY1NjE5YTkwOWQxNjQ5YThhN2Q1YmE5NzFhZGI1ZDFmN2JhNzc5MmQ4NjI0ZWM4MDA5Yia8uyw=: 00:24:43.294 10:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:43.294 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:43.294 10:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:43.294 10:40:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.294 10:40:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:43.294 10:40:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.294 10:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:43.294 10:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:43.294 10:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:43.553 10:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:24:43.553 10:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:43.553 10:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:43.553 10:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:43.553 10:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:43.553 10:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:43.553 10:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:43.553 10:40:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.553 10:40:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:43.553 10:40:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.553 10:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:43.553 10:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:44.122 00:24:44.122 10:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:44.122 10:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:44.122 10:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:44.382 10:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.382 10:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:44.382 10:40:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.382 10:40:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:44.382 10:40:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.382 10:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:44.382 { 00:24:44.382 "cntlid": 139, 00:24:44.382 "qid": 0, 00:24:44.382 "state": "enabled", 00:24:44.382 "thread": "nvmf_tgt_poll_group_000", 00:24:44.382 "listen_address": { 00:24:44.382 "trtype": "TCP", 00:24:44.382 "adrfam": "IPv4", 00:24:44.382 "traddr": "10.0.0.2", 00:24:44.382 "trsvcid": "4420" 00:24:44.382 }, 00:24:44.382 "peer_address": { 00:24:44.382 "trtype": "TCP", 00:24:44.382 "adrfam": "IPv4", 00:24:44.382 "traddr": "10.0.0.1", 00:24:44.382 "trsvcid": "41846" 00:24:44.382 }, 00:24:44.382 "auth": { 00:24:44.382 "state": "completed", 00:24:44.382 "digest": "sha512", 00:24:44.382 "dhgroup": "ffdhe8192" 00:24:44.382 } 00:24:44.382 } 00:24:44.382 ]' 00:24:44.382 10:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:44.382 10:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:44.382 10:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:44.382 10:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:44.382 10:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:44.382 10:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:44.382 10:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:44.382 10:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:44.643 10:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:N2VmZTE4MTNhMzBiZmU4Mjg5NzM2OGE1ODhjNTQ1YTcJFoN9: --dhchap-ctrl-secret DHHC-1:02:M2NmZDA1ZmZhZWY2MjE0ZTI1OGM0OTJjNjhmNjQyNmQ5MDQ3MzgyYjdmZmY3NmZke0TaFQ==: 00:24:45.213 10:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:45.213 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:45.213 10:40:50 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:45.213 10:40:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.213 10:40:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:45.213 10:40:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.213 10:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:45.213 10:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:45.213 10:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:45.473 10:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:24:45.473 10:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:45.473 10:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:45.473 10:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:45.473 10:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:45.473 10:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:45.473 10:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:45.473 10:40:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.473 10:40:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:45.473 10:40:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.473 10:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:45.473 10:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:46.042 00:24:46.042 10:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:46.042 10:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:46.043 10:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:46.043 10:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.043 10:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:46.043 10:40:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.043 10:40:51 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:24:46.043 10:40:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.043 10:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:46.043 { 00:24:46.043 "cntlid": 141, 00:24:46.043 "qid": 0, 00:24:46.043 "state": "enabled", 00:24:46.043 "thread": "nvmf_tgt_poll_group_000", 00:24:46.043 "listen_address": { 00:24:46.043 "trtype": "TCP", 00:24:46.043 "adrfam": "IPv4", 00:24:46.043 "traddr": "10.0.0.2", 00:24:46.043 "trsvcid": "4420" 00:24:46.043 }, 00:24:46.043 "peer_address": { 00:24:46.043 "trtype": "TCP", 00:24:46.043 "adrfam": "IPv4", 00:24:46.043 "traddr": "10.0.0.1", 00:24:46.043 "trsvcid": "41874" 00:24:46.043 }, 00:24:46.043 "auth": { 00:24:46.043 "state": "completed", 00:24:46.043 "digest": "sha512", 00:24:46.043 "dhgroup": "ffdhe8192" 00:24:46.043 } 00:24:46.043 } 00:24:46.043 ]' 00:24:46.043 10:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:46.310 10:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:46.310 10:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:46.310 10:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:46.310 10:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:46.310 10:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:46.310 10:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:46.310 10:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:46.575 10:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:MTY0MGRmZjE2ZTAzNWQ0NjFkZjU2YjlhN2IwYmMzNjgwYjAwMDhjYzMwM2JkOTYyXnKU4Q==: --dhchap-ctrl-secret DHHC-1:01:NGQ2OTQ1MzJiYTljYTczYWEzYzA5MWUxNDYxYzZhZjkPEE46: 00:24:47.146 10:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:47.146 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:47.146 10:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:47.146 10:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.146 10:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:47.146 10:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.146 10:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:47.146 10:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:47.146 10:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:47.406 10:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:24:47.406 10:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:47.406 10:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:47.406 10:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:47.406 10:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:47.406 10:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:47.406 10:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:24:47.406 10:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.406 10:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:47.406 10:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.406 10:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:47.406 10:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:47.977 00:24:47.977 10:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:47.977 10:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:47.977 10:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:47.977 10:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.977 10:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:47.977 10:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.977 10:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:47.977 10:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.977 10:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:47.977 { 00:24:47.977 "cntlid": 143, 00:24:47.977 "qid": 0, 00:24:47.977 "state": "enabled", 00:24:47.977 "thread": "nvmf_tgt_poll_group_000", 00:24:47.977 "listen_address": { 00:24:47.977 "trtype": "TCP", 00:24:47.977 "adrfam": "IPv4", 00:24:47.977 "traddr": "10.0.0.2", 00:24:47.977 "trsvcid": "4420" 00:24:47.977 }, 00:24:47.977 "peer_address": { 00:24:47.977 "trtype": "TCP", 00:24:47.977 "adrfam": "IPv4", 00:24:47.977 "traddr": "10.0.0.1", 00:24:47.977 "trsvcid": "40460" 00:24:47.977 }, 00:24:47.977 "auth": { 00:24:47.977 "state": "completed", 00:24:47.977 "digest": "sha512", 00:24:47.977 "dhgroup": "ffdhe8192" 00:24:47.977 } 00:24:47.977 } 00:24:47.977 ]' 00:24:47.977 10:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:47.977 10:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:47.977 
10:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:48.237 10:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:48.237 10:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:48.237 10:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:48.237 10:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:48.237 10:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:48.237 10:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:MmFjZjBjNGZmMTdkNjNhYjBmOTgzNmE0ZGVhMGU3M2YyZjgzNDhkNzcwYWJkYTIzOTdhMWEwY2UzMDMzZGQ1MpoHJWU=: 00:24:49.174 10:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:49.174 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:49.174 10:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:49.174 10:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.174 10:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:49.174 10:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.174 10:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:24:49.174 10:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:24:49.174 10:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:24:49.174 10:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:49.174 10:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:49.174 10:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:49.174 10:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:24:49.174 10:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:49.174 10:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:49.174 10:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:49.174 10:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:49.174 10:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:49.174 10:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:24:49.174 10:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.174 10:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:49.174 10:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.174 10:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:49.174 10:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:49.743 00:24:49.743 10:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:49.743 10:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:49.743 10:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:50.002 10:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.002 10:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:50.002 10:40:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.002 10:40:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:50.002 10:40:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.002 10:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:50.002 { 00:24:50.002 "cntlid": 145, 00:24:50.002 "qid": 0, 00:24:50.002 "state": "enabled", 00:24:50.002 "thread": "nvmf_tgt_poll_group_000", 00:24:50.002 "listen_address": { 00:24:50.002 "trtype": "TCP", 00:24:50.002 "adrfam": "IPv4", 00:24:50.002 "traddr": "10.0.0.2", 00:24:50.002 "trsvcid": "4420" 00:24:50.002 }, 00:24:50.002 "peer_address": { 00:24:50.002 "trtype": "TCP", 00:24:50.002 "adrfam": "IPv4", 00:24:50.002 "traddr": "10.0.0.1", 00:24:50.002 "trsvcid": "40472" 00:24:50.002 }, 00:24:50.002 "auth": { 00:24:50.002 "state": "completed", 00:24:50.002 "digest": "sha512", 00:24:50.002 "dhgroup": "ffdhe8192" 00:24:50.002 } 00:24:50.002 } 00:24:50.002 ]' 00:24:50.002 10:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:50.002 10:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:50.002 10:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:50.002 10:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:50.002 10:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:50.002 10:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:50.002 10:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:50.002 10:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:50.261 10:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ODQ4MmNmYzc5N2Q3YzNiYjIyMTBjYWY1MWMxMmEyMjM4Y2VmZDliOWM0OTI4Yzky/ePZDw==: --dhchap-ctrl-secret DHHC-1:03:M2ZmMDExZjRmODRjOTY1NjE5YTkwOWQxNjQ5YThhN2Q1YmE5NzFhZGI1ZDFmN2JhNzc5MmQ4NjI0ZWM4MDA5Yia8uyw=: 00:24:50.829 10:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:50.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:50.829 10:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:50.829 10:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.829 10:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:50.829 10:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.829 10:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:24:50.829 10:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.829 10:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:50.829 10:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.829 10:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:24:50.829 10:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:24:50.829 10:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:24:50.829 10:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:24:50.829 10:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:50.829 10:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:24:50.829 10:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:50.829 10:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:24:50.829 10:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:24:51.397 request: 00:24:51.397 { 00:24:51.397 "name": "nvme0", 00:24:51.397 "trtype": "tcp", 00:24:51.397 "traddr": "10.0.0.2", 00:24:51.397 "adrfam": "ipv4", 00:24:51.397 "trsvcid": "4420", 00:24:51.397 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:51.397 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:24:51.397 "prchk_reftag": false, 00:24:51.397 "prchk_guard": false, 00:24:51.397 "hdgst": false, 00:24:51.397 "ddgst": false, 00:24:51.397 "dhchap_key": "key2", 00:24:51.397 "method": "bdev_nvme_attach_controller", 00:24:51.397 "req_id": 1 00:24:51.397 } 00:24:51.397 Got JSON-RPC error response 00:24:51.397 response: 00:24:51.397 { 00:24:51.397 "code": -5, 00:24:51.397 "message": "Input/output error" 00:24:51.397 } 00:24:51.397 10:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:24:51.397 10:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:51.397 10:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:51.397 10:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:51.397 10:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:51.397 10:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.397 10:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:51.397 10:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.397 10:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:51.397 10:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.397 10:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:51.397 10:40:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.397 10:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:51.397 10:40:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:24:51.397 10:40:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:51.397 10:40:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:24:51.397 10:40:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:51.397 10:40:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:24:51.397 10:40:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:51.397 10:40:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:51.397 10:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:51.967 request: 00:24:51.967 { 00:24:51.967 "name": "nvme0", 00:24:51.967 "trtype": "tcp", 00:24:51.967 "traddr": "10.0.0.2", 00:24:51.967 "adrfam": "ipv4", 00:24:51.967 "trsvcid": "4420", 00:24:51.967 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:51.967 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:24:51.967 "prchk_reftag": false, 00:24:51.967 "prchk_guard": false, 00:24:51.967 "hdgst": false, 00:24:51.967 "ddgst": false, 00:24:51.967 "dhchap_key": "key1", 00:24:51.967 "dhchap_ctrlr_key": "ckey2", 00:24:51.967 "method": "bdev_nvme_attach_controller", 00:24:51.967 "req_id": 1 00:24:51.967 } 00:24:51.967 Got JSON-RPC error response 00:24:51.967 response: 00:24:51.967 { 00:24:51.967 "code": -5, 00:24:51.967 "message": "Input/output error" 00:24:51.967 } 00:24:51.967 10:40:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:24:51.967 10:40:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:51.967 10:40:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:51.967 10:40:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:51.967 10:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:51.967 10:40:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.967 10:40:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:51.967 10:40:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.967 10:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:24:51.968 10:40:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.968 10:40:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:51.968 10:40:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.968 10:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:51.968 10:40:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:24:51.968 10:40:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:51.968 10:40:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:24:51.968 10:40:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:51.968 10:40:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:24:51.968 10:40:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:51.968 10:40:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:51.968 10:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:52.540 request: 00:24:52.540 { 00:24:52.540 "name": "nvme0", 00:24:52.540 "trtype": "tcp", 00:24:52.540 "traddr": "10.0.0.2", 00:24:52.540 "adrfam": "ipv4", 00:24:52.540 "trsvcid": "4420", 00:24:52.540 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:52.540 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:24:52.540 "prchk_reftag": false, 00:24:52.540 "prchk_guard": false, 00:24:52.540 "hdgst": false, 00:24:52.540 "ddgst": false, 00:24:52.540 "dhchap_key": "key1", 00:24:52.540 "dhchap_ctrlr_key": "ckey1", 00:24:52.540 "method": "bdev_nvme_attach_controller", 00:24:52.540 "req_id": 1 00:24:52.540 } 00:24:52.540 Got JSON-RPC error response 00:24:52.540 response: 00:24:52.540 { 00:24:52.540 "code": -5, 00:24:52.540 "message": "Input/output error" 00:24:52.540 } 00:24:52.540 10:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:24:52.540 10:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:52.540 10:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:52.540 10:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:52.540 10:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:52.540 10:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.540 10:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:52.540 10:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.540 10:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1998790 00:24:52.540 10:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1998790 ']' 00:24:52.540 10:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1998790 00:24:52.540 10:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:24:52.540 10:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:52.540 10:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1998790 00:24:52.540 10:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:52.540 10:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:24:52.540 10:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1998790' 00:24:52.540 killing process with pid 1998790 00:24:52.540 10:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1998790 00:24:52.540 10:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1998790 00:24:52.540 10:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:24:52.540 10:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:52.540 10:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:52.540 10:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:52.540 10:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2024746 00:24:52.540 10:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:24:52.540 10:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2024746 00:24:52.540 10:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2024746 ']' 00:24:52.540 10:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:52.540 10:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:52.540 10:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:52.540 10:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:52.540 10:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:53.481 10:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:53.481 10:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:24:53.481 10:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:53.481 10:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:53.481 10:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:53.481 10:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:53.481 10:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:24:53.481 10:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 2024746 00:24:53.481 10:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2024746 ']' 00:24:53.481 10:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:53.481 10:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:53.481 10:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:53.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:53.481 10:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:53.481 10:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:53.741 10:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:53.741 10:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:24:53.741 10:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:24:53.741 10:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.741 10:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:53.741 10:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.741 10:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:24:53.741 10:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:53.741 10:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:53.741 10:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:53.741 10:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:53.741 10:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:53.741 10:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:24:53.741 10:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.741 10:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:53.741 10:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.741 10:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:53.741 10:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:54.312 00:24:54.312 10:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:54.312 10:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:54.312 10:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:54.312 10:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.312 10:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:54.312 10:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.312 10:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:54.312 10:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.312 10:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:54.312 { 00:24:54.312 
"cntlid": 1, 00:24:54.312 "qid": 0, 00:24:54.312 "state": "enabled", 00:24:54.312 "thread": "nvmf_tgt_poll_group_000", 00:24:54.312 "listen_address": { 00:24:54.312 "trtype": "TCP", 00:24:54.312 "adrfam": "IPv4", 00:24:54.312 "traddr": "10.0.0.2", 00:24:54.312 "trsvcid": "4420" 00:24:54.312 }, 00:24:54.312 "peer_address": { 00:24:54.312 "trtype": "TCP", 00:24:54.312 "adrfam": "IPv4", 00:24:54.312 "traddr": "10.0.0.1", 00:24:54.312 "trsvcid": "40540" 00:24:54.312 }, 00:24:54.312 "auth": { 00:24:54.312 "state": "completed", 00:24:54.312 "digest": "sha512", 00:24:54.312 "dhgroup": "ffdhe8192" 00:24:54.312 } 00:24:54.312 } 00:24:54.312 ]' 00:24:54.312 10:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:54.573 10:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:54.573 10:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:54.573 10:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:54.573 10:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:54.573 10:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:54.573 10:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:54.573 10:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:54.834 10:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:MmFjZjBjNGZmMTdkNjNhYjBmOTgzNmE0ZGVhMGU3M2YyZjgzNDhkNzcwYWJkYTIzOTdhMWEwY2UzMDMzZGQ1MpoHJWU=: 00:24:55.406 10:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:55.406 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:55.406 10:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:55.406 10:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.406 10:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:55.406 10:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.406 10:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:24:55.406 10:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.406 10:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:55.406 10:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.406 10:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:24:55.406 10:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:24:55.666 10:41:01 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:55.666 10:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:24:55.666 10:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:55.666 10:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:24:55.666 10:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:55.666 10:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:24:55.666 10:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:55.666 10:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:55.666 10:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:55.666 request: 00:24:55.666 { 00:24:55.666 "name": "nvme0", 00:24:55.666 "trtype": "tcp", 00:24:55.666 "traddr": "10.0.0.2", 00:24:55.666 "adrfam": "ipv4", 00:24:55.666 "trsvcid": "4420", 00:24:55.666 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:55.666 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:24:55.666 "prchk_reftag": false, 00:24:55.666 "prchk_guard": false, 00:24:55.666 "hdgst": false, 00:24:55.666 "ddgst": false, 00:24:55.666 "dhchap_key": "key3", 00:24:55.666 "method": "bdev_nvme_attach_controller", 00:24:55.666 "req_id": 1 00:24:55.666 } 00:24:55.666 Got JSON-RPC error response 00:24:55.666 response: 00:24:55.666 { 00:24:55.666 "code": -5, 00:24:55.666 "message": "Input/output error" 00:24:55.666 } 00:24:55.666 10:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:24:55.666 10:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:55.666 10:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:55.666 10:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:55.965 10:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:24:55.965 10:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:24:55.965 10:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:24:55.965 10:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:24:55.965 10:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:55.965 10:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:24:55.965 10:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:55.965 10:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:24:55.965 10:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:55.965 10:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:24:55.965 10:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:55.965 10:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:55.965 10:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:56.274 request: 00:24:56.274 { 00:24:56.274 "name": "nvme0", 00:24:56.274 "trtype": "tcp", 00:24:56.274 "traddr": "10.0.0.2", 00:24:56.274 "adrfam": "ipv4", 00:24:56.274 "trsvcid": "4420", 00:24:56.274 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:56.274 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:24:56.274 "prchk_reftag": false, 00:24:56.274 "prchk_guard": false, 00:24:56.274 "hdgst": false, 00:24:56.274 "ddgst": false, 00:24:56.274 "dhchap_key": "key3", 00:24:56.274 "method": "bdev_nvme_attach_controller", 00:24:56.274 "req_id": 1 00:24:56.274 } 00:24:56.274 Got JSON-RPC error response 00:24:56.274 response: 00:24:56.274 { 00:24:56.274 "code": -5, 00:24:56.274 "message": "Input/output error" 00:24:56.274 } 00:24:56.274 10:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:24:56.274 10:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:56.274 10:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:56.274 10:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:56.274 10:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:24:56.274 10:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:24:56.274 10:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:24:56.274 10:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:56.274 10:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:56.274 10:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:56.274 10:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:56.274 10:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.274 10:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:56.274 10:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.274 10:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:56.274 10:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.274 10:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:56.274 10:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.274 10:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:56.274 10:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:24:56.274 10:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:56.274 10:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:24:56.274 10:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:56.274 10:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:24:56.274 10:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:56.274 10:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:56.274 10:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:56.535 request: 00:24:56.535 { 00:24:56.535 "name": "nvme0", 00:24:56.535 "trtype": "tcp", 00:24:56.535 "traddr": "10.0.0.2", 00:24:56.535 "adrfam": "ipv4", 00:24:56.535 "trsvcid": "4420", 00:24:56.535 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:56.535 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:24:56.535 "prchk_reftag": false, 00:24:56.535 "prchk_guard": false, 00:24:56.535 "hdgst": false, 00:24:56.535 "ddgst": false, 00:24:56.535 
"dhchap_key": "key0", 00:24:56.535 "dhchap_ctrlr_key": "key1", 00:24:56.535 "method": "bdev_nvme_attach_controller", 00:24:56.535 "req_id": 1 00:24:56.535 } 00:24:56.535 Got JSON-RPC error response 00:24:56.535 response: 00:24:56.535 { 00:24:56.535 "code": -5, 00:24:56.535 "message": "Input/output error" 00:24:56.535 } 00:24:56.535 10:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:24:56.535 10:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:56.535 10:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:56.535 10:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:56.535 10:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:24:56.535 10:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:24:56.535 00:24:56.795 10:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:24:56.795 10:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:24:56.795 10:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:56.795 10:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.795 10:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:56.795 10:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:57.055 10:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:24:57.055 10:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:24:57.055 10:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1999133 00:24:57.055 10:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1999133 ']' 00:24:57.055 10:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1999133 00:24:57.055 10:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:24:57.055 10:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:57.055 10:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1999133 00:24:57.055 10:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:57.055 10:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:57.055 10:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1999133' 00:24:57.055 killing process with pid 1999133 00:24:57.055 10:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1999133 00:24:57.055 10:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1999133 
00:24:57.316 10:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:24:57.316 10:41:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:57.316 10:41:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:24:57.316 10:41:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:57.316 10:41:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:24:57.316 10:41:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:57.316 10:41:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:57.316 rmmod nvme_tcp 00:24:57.316 rmmod nvme_fabrics 00:24:57.316 rmmod nvme_keyring 00:24:57.316 10:41:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:57.316 10:41:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:24:57.316 10:41:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:24:57.316 10:41:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 2024746 ']' 00:24:57.316 10:41:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 2024746 00:24:57.316 10:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2024746 ']' 00:24:57.316 10:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2024746 00:24:57.316 10:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:24:57.316 10:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:57.316 10:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2024746 00:24:57.316 10:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:57.316 10:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:57.316 10:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2024746' 00:24:57.316 killing process with pid 2024746 00:24:57.316 10:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2024746 00:24:57.316 10:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2024746 00:24:57.576 10:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:57.576 10:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:57.576 10:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:57.576 10:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:57.576 10:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:57.576 10:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.576 10:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:57.576 10:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:59.482 10:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:59.482 10:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.DgX /tmp/spdk.key-sha256.Tm8 /tmp/spdk.key-sha384.Gov /tmp/spdk.key-sha512.0ax /tmp/spdk.key-sha512.6Wv /tmp/spdk.key-sha384.JEp /tmp/spdk.key-sha256.L7Q '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:24:59.482 00:24:59.482 real 2m21.582s 00:24:59.482 user 5m12.959s 00:24:59.482 sys 0m19.645s 00:24:59.482 10:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:59.482 10:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:59.482 ************************************ 00:24:59.482 END TEST nvmf_auth_target 00:24:59.482 ************************************ 00:24:59.482 10:41:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:59.482 10:41:05 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:24:59.482 10:41:05 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:24:59.482 10:41:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:24:59.482 10:41:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:59.482 10:41:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:59.482 ************************************ 00:24:59.482 START TEST nvmf_bdevio_no_huge 00:24:59.482 ************************************ 00:24:59.482 10:41:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:24:59.754 * Looking for test storage... 00:24:59.754 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:59.754 10:41:05 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:59.754 10:41:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:24:59.754 10:41:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:59.754 10:41:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:59.754 10:41:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:59.754 10:41:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:59.754 10:41:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:59.754 10:41:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:59.754 10:41:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:59.754 10:41:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:59.754 10:41:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:59.754 10:41:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:59.754 10:41:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:59.754 10:41:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:59.754 10:41:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:59.754 10:41:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:59.754 10:41:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:59.754 10:41:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:59.754 10:41:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:59.754 10:41:05 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:59.754 10:41:05 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:59.754 10:41:05 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:59.754 10:41:05 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.754 10:41:05 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.754 10:41:05 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.754 10:41:05 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:24:59.755 10:41:05 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.755 10:41:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:24:59.755 10:41:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:59.755 10:41:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:59.755 10:41:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:59.755 10:41:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:24:59.755 10:41:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:59.755 10:41:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:59.755 10:41:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:59.755 10:41:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:59.755 10:41:05 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:59.755 10:41:05 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:59.755 10:41:05 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:24:59.755 10:41:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:59.755 10:41:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:59.755 10:41:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:59.755 10:41:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:59.755 10:41:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:59.755 10:41:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:59.755 10:41:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:59.755 10:41:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:59.755 10:41:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:59.755 10:41:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:59.755 10:41:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:24:59.755 10:41:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:07.895 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:07.895 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:25:07.895 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:07.895 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:07.895 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:07.895 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:07.895 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:07.895 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:25:07.895 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:07.895 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:25:07.895 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:25:07.895 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:25:07.895 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:25:07.895 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:25:07.895 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:25:07.895 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:07.896 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:07.896 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:07.896 Found net devices under 0000:31:00.0: cvl_0_0 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:07.896 Found net devices under 0000:31:00.1: cvl_0_1 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:07.896 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:07.896 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.516 ms 00:25:07.896 00:25:07.896 --- 10.0.0.2 ping statistics --- 00:25:07.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:07.896 rtt min/avg/max/mdev = 0.516/0.516/0.516/0.000 ms 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:07.896 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:07.896 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:25:07.896 00:25:07.896 --- 10.0.0.1 ping statistics --- 00:25:07.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:07.896 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=2030465 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 2030465 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 2030465 ']' 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:07.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:07.896 10:41:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:07.896 [2024-07-22 10:41:13.575894] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:25:07.896 [2024-07-22 10:41:13.575962] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:25:08.157 [2024-07-22 10:41:13.672132] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:08.157 [2024-07-22 10:41:13.751001] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:08.157 [2024-07-22 10:41:13.751050] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:08.157 [2024-07-22 10:41:13.751058] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:08.157 [2024-07-22 10:41:13.751064] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:08.157 [2024-07-22 10:41:13.751071] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:08.157 [2024-07-22 10:41:13.751231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:25:08.157 [2024-07-22 10:41:13.751391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:25:08.157 [2024-07-22 10:41:13.751554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:08.157 [2024-07-22 10:41:13.751552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:25:08.730 10:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:08.730 10:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:25:08.730 10:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:08.730 10:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:08.730 10:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:08.730 10:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:08.730 10:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:08.730 10:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.730 10:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:08.730 [2024-07-22 10:41:14.421357] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:08.992 10:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.992 10:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:08.992 10:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.992 10:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:08.992 Malloc0 00:25:08.992 10:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.992 10:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:08.992 10:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.992 10:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:08.992 10:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.992 10:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:08.992 10:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.992 10:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:08.992 10:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.992 10:41:14 
nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:08.992 10:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.992 10:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:08.992 [2024-07-22 10:41:14.474910] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:08.992 10:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.992 10:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:25:08.992 10:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:25:08.992 10:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:25:08.992 10:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:25:08.992 10:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:08.992 10:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:08.992 { 00:25:08.992 "params": { 00:25:08.992 "name": "Nvme$subsystem", 00:25:08.992 "trtype": "$TEST_TRANSPORT", 00:25:08.992 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:08.992 "adrfam": "ipv4", 00:25:08.992 "trsvcid": "$NVMF_PORT", 00:25:08.992 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:08.992 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:08.992 "hdgst": ${hdgst:-false}, 00:25:08.992 "ddgst": ${ddgst:-false} 00:25:08.992 }, 00:25:08.992 "method": "bdev_nvme_attach_controller" 00:25:08.992 } 00:25:08.992 EOF 00:25:08.992 )") 00:25:08.992 10:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:25:08.992 10:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:25:08.992 10:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:25:08.992 10:41:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:08.992 "params": { 00:25:08.992 "name": "Nvme1", 00:25:08.992 "trtype": "tcp", 00:25:08.992 "traddr": "10.0.0.2", 00:25:08.992 "adrfam": "ipv4", 00:25:08.992 "trsvcid": "4420", 00:25:08.992 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:08.992 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:08.992 "hdgst": false, 00:25:08.992 "ddgst": false 00:25:08.992 }, 00:25:08.992 "method": "bdev_nvme_attach_controller" 00:25:08.992 }' 00:25:08.992 [2024-07-22 10:41:14.529055] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
00:25:08.992 [2024-07-22 10:41:14.529130] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2030583 ] 00:25:08.992 [2024-07-22 10:41:14.600158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:08.992 [2024-07-22 10:41:14.670740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:08.992 [2024-07-22 10:41:14.670861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:08.992 [2024-07-22 10:41:14.670865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:09.562 I/O targets: 00:25:09.562 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:25:09.562 00:25:09.562 00:25:09.562 CUnit - A unit testing framework for C - Version 2.1-3 00:25:09.562 http://cunit.sourceforge.net/ 00:25:09.562 00:25:09.562 00:25:09.562 Suite: bdevio tests on: Nvme1n1 00:25:09.562 Test: blockdev write read block ...passed 00:25:09.562 Test: blockdev write zeroes read block ...passed 00:25:09.562 Test: blockdev write zeroes read no split ...passed 00:25:09.562 Test: blockdev write zeroes read split ...passed 00:25:09.562 Test: blockdev write zeroes read split partial ...passed 00:25:09.562 Test: blockdev reset ...[2024-07-22 10:41:15.154727] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.562 [2024-07-22 10:41:15.154795] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1d110 (9): Bad file descriptor 00:25:09.562 [2024-07-22 10:41:15.174355] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:09.562 passed 00:25:09.562 Test: blockdev write read 8 blocks ...passed 00:25:09.562 Test: blockdev write read size > 128k ...passed 00:25:09.562 Test: blockdev write read invalid size ...passed 00:25:09.562 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:25:09.562 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:25:09.562 Test: blockdev write read max offset ...passed 00:25:09.821 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:25:09.821 Test: blockdev writev readv 8 blocks ...passed 00:25:09.821 Test: blockdev writev readv 30 x 1block ...passed 00:25:09.821 Test: blockdev writev readv block ...passed 00:25:09.821 Test: blockdev writev readv size > 128k ...passed 00:25:09.821 Test: blockdev writev readv size > 128k in two iovs ...passed 00:25:09.821 Test: blockdev comparev and writev ...[2024-07-22 10:41:15.399468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:09.821 [2024-07-22 10:41:15.399502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:09.821 [2024-07-22 10:41:15.399514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:09.821 [2024-07-22 10:41:15.399519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.821 [2024-07-22 10:41:15.400020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:09.821 [2024-07-22 10:41:15.400028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:09.821 [2024-07-22 10:41:15.400038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:09.821 [2024-07-22 10:41:15.400043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:09.821 [2024-07-22 10:41:15.400531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:09.821 [2024-07-22 10:41:15.400540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:09.821 [2024-07-22 10:41:15.400550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:09.821 [2024-07-22 10:41:15.400555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:09.821 [2024-07-22 10:41:15.401029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:09.821 [2024-07-22 10:41:15.401037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:09.821 [2024-07-22 10:41:15.401047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:09.821 [2024-07-22 10:41:15.401052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:09.821 passed 00:25:09.821 Test: blockdev nvme passthru rw ...passed 00:25:09.821 Test: blockdev nvme passthru vendor specific ...[2024-07-22 10:41:15.486241] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:09.821 [2024-07-22 10:41:15.486252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:09.822 [2024-07-22 10:41:15.486605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:09.822 [2024-07-22 10:41:15.486613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:09.822 [2024-07-22 10:41:15.486968] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:09.822 [2024-07-22 10:41:15.486976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:09.822 [2024-07-22 10:41:15.487367] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:09.822 [2024-07-22 10:41:15.487376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:09.822 passed 00:25:09.822 Test: blockdev nvme admin passthru ...passed 00:25:10.081 Test: blockdev copy ...passed 00:25:10.081 00:25:10.081 Run Summary: Type Total Ran Passed Failed Inactive 00:25:10.081 suites 1 1 n/a 0 0 00:25:10.081 tests 23 23 23 0 0 00:25:10.081 asserts 152 152 152 0 n/a 00:25:10.081 00:25:10.081 Elapsed time = 1.148 seconds 00:25:10.340 10:41:15 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:10.340 10:41:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.340 10:41:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:10.340 10:41:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.340 10:41:15 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:25:10.340 10:41:15 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:25:10.340 10:41:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:10.340 10:41:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:25:10.340 10:41:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:10.340 10:41:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:25:10.340 10:41:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:10.340 10:41:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:10.340 rmmod nvme_tcp 00:25:10.340 rmmod nvme_fabrics 00:25:10.340 rmmod nvme_keyring 00:25:10.340 10:41:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:10.340 10:41:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:25:10.340 10:41:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:25:10.340 10:41:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 2030465 ']' 00:25:10.340 10:41:15 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 2030465 00:25:10.340 10:41:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 2030465 ']' 00:25:10.340 10:41:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 2030465 00:25:10.340 10:41:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:25:10.340 10:41:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:10.340 10:41:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2030465 00:25:10.340 10:41:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:25:10.340 10:41:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:25:10.340 10:41:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2030465' 00:25:10.340 killing process with pid 2030465 00:25:10.340 10:41:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 2030465 00:25:10.340 10:41:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 2030465 00:25:10.599 10:41:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:10.599 10:41:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:10.599 10:41:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:10.599 10:41:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:10.599 10:41:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:10.599 10:41:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:10.599 10:41:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:10.599 10:41:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:12.505 10:41:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:12.766 00:25:12.766 real 0m13.027s 00:25:12.766 user 0m14.194s 00:25:12.766 sys 0m6.901s 00:25:12.766 10:41:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:12.766 10:41:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:12.766 ************************************ 00:25:12.766 END TEST nvmf_bdevio_no_huge 00:25:12.766 ************************************ 00:25:12.766 10:41:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:12.766 10:41:18 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:25:12.766 10:41:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:12.766 10:41:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:12.766 10:41:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:12.766 ************************************ 00:25:12.766 START TEST nvmf_tls 00:25:12.766 ************************************ 00:25:12.766 10:41:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:25:12.766 * Looking for test storage... 
00:25:12.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:12.766 10:41:18 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:12.766 10:41:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:25:12.766 10:41:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:12.766 10:41:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:12.766 10:41:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:12.766 10:41:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:12.766 10:41:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:12.766 10:41:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:12.766 10:41:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:12.766 10:41:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:12.766 10:41:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:12.766 10:41:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:12.766 10:41:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:12.766 10:41:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:12.766 10:41:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:12.766 10:41:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:12.766 10:41:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:12.766 10:41:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:12.766 10:41:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:12.766 10:41:18 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:12.766 10:41:18 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:12.766 10:41:18 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:12.766 10:41:18 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.766 10:41:18 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.767 10:41:18 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.767 10:41:18 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:25:12.767 10:41:18 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.767 10:41:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:25:12.767 10:41:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:12.767 10:41:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:12.767 10:41:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:12.767 10:41:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:12.767 10:41:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:12.767 10:41:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:12.767 10:41:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:12.767 10:41:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:12.767 10:41:18 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:12.767 10:41:18 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:25:12.767 10:41:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:12.767 10:41:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:12.767 10:41:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:12.767 10:41:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:12.767 10:41:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:12.767 10:41:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:12.767 10:41:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:12.767 10:41:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:12.767 10:41:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:12.767 10:41:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:12.767 10:41:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:25:12.767 10:41:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:20.908 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:20.908 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:25:20.908 
10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:20.908 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:20.908 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:20.908 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:20.908 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:20.908 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:25:20.908 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:20.908 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:25:20.908 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:25:20.908 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:25:20.908 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:25:20.908 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:25:20.908 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:25:20.908 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:20.908 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:20.908 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:20.908 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:20.908 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:20.908 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:20.908 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:20.909 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:20.909 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:20.909 Found net devices under 0000:31:00.0: cvl_0_0 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:20.909 Found net devices under 0000:31:00.1: cvl_0_1 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:20.909 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:20.909 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.762 ms 00:25:20.909 00:25:20.909 --- 10.0.0.2 ping statistics --- 00:25:20.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:20.909 rtt min/avg/max/mdev = 0.762/0.762/0.762/0.000 ms 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:20.909 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:20.909 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:25:20.909 00:25:20.909 --- 10.0.0.1 ping statistics --- 00:25:20.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:20.909 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2035508 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2035508 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2035508 ']' 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:20.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:20.909 10:41:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:20.909 [2024-07-22 10:41:26.525098] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:25:20.909 [2024-07-22 10:41:26.525149] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:20.909 EAL: No free 2048 kB hugepages reported on node 1 00:25:21.170 [2024-07-22 10:41:26.618179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:21.170 [2024-07-22 10:41:26.663846] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:21.170 [2024-07-22 10:41:26.663903] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:21.170 [2024-07-22 10:41:26.663912] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:21.170 [2024-07-22 10:41:26.663918] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:21.170 [2024-07-22 10:41:26.663924] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:21.170 [2024-07-22 10:41:26.663959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:21.742 10:41:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:21.742 10:41:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:21.742 10:41:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:21.742 10:41:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:21.742 10:41:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:21.742 10:41:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:21.742 10:41:27 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:25:21.742 10:41:27 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:25:22.004 true 00:25:22.004 10:41:27 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:22.004 10:41:27 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:25:22.004 10:41:27 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:25:22.004 10:41:27 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:25:22.004 10:41:27 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:25:22.266 10:41:27 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:22.266 10:41:27 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:25:22.527 10:41:28 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:25:22.527 10:41:28 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:25:22.527 10:41:28 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:25:22.527 10:41:28 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:22.527 10:41:28 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:25:22.788 10:41:28 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:25:22.788 10:41:28 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:25:22.788 10:41:28 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:22.788 10:41:28 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:25:23.048 10:41:28 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:25:23.049 10:41:28 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:25:23.049 10:41:28 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:25:23.049 10:41:28 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:23.049 10:41:28 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:25:23.309 10:41:28 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:25:23.309 10:41:28 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:25:23.309 10:41:28 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:25:23.570 10:41:29 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:23.570 10:41:29 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:25:23.570 10:41:29 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:25:23.570 10:41:29 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:25:23.570 10:41:29 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:25:23.570 10:41:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:25:23.570 10:41:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:25:23.570 10:41:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:25:23.570 10:41:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:25:23.570 10:41:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:25:23.570 10:41:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:25:23.570 10:41:29 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:23.570 10:41:29 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:25:23.570 10:41:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:25:23.570 10:41:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:25:23.570 10:41:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:25:23.571 10:41:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:25:23.571 10:41:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:25:23.571 10:41:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:25:23.832 10:41:29 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:25:23.832 10:41:29 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:25:23.832 10:41:29 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.TReI68Mgjt 00:25:23.832 10:41:29 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:25:23.832 10:41:29 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.kTA3Pa6KjJ 00:25:23.832 10:41:29 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:23.832 10:41:29 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:25:23.832 10:41:29 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.TReI68Mgjt 00:25:23.832 10:41:29 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.kTA3Pa6KjJ 00:25:23.832 10:41:29 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:25:23.832 10:41:29 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:25:24.093 10:41:29 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.TReI68Mgjt 00:25:24.093 10:41:29 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.TReI68Mgjt 00:25:24.093 10:41:29 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:24.354 [2024-07-22 10:41:29.892649] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:24.354 10:41:29 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:24.628 10:41:30 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:24.628 [2024-07-22 10:41:30.205341] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:24.628 [2024-07-22 10:41:30.205545] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:24.628 10:41:30 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:24.888 malloc0 00:25:24.888 10:41:30 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:24.888 10:41:30 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.TReI68Mgjt 00:25:25.147 [2024-07-22 10:41:30.668494] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:25.147 10:41:30 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.TReI68Mgjt 00:25:25.147 EAL: No free 2048 kB hugepages reported on node 1 00:25:35.137 Initializing NVMe Controllers 00:25:35.137 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:35.137 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:35.137 Initialization complete. Launching workers. 
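The key handed to spdk_nvme_perf above via --psk-path (and to the target via nvmf_subsystem_add_host --psk) is one of the two interchange-format secrets generated a few steps earlier by format_interchange_psk: NVMeTLSkey-1:01:MDAx...: for the cnode1/host1 pair and NVMeTLSkey-1:01:ZmZl...: as the deliberately mismatched second key. The following is a minimal Python sketch of that interchange layout, not the authoritative helper: it assumes the configured key is taken as the literal ASCII string shown in the log and that a 4-byte CRC32 of it is appended (byte order assumed little-endian) before base64 encoding; the real logic is the inline python in nvmf/common.sh.

    import base64, zlib

    def format_interchange_psk(configured_key: str, hash_id: int = 1) -> str:
        # Interchange shape seen in the log: "NVMeTLSkey-1:<hh>:<base64(key || CRC32(key))>:"
        # hash_id 1 -> "01" as in the keys above (retained-PSK digest selector; assumption).
        key_bytes = configured_key.encode("ascii")            # key used verbatim, as in the log
        crc = zlib.crc32(key_bytes).to_bytes(4, "little")     # CRC32 trailer; byte order is an assumption
        body = base64.b64encode(key_bytes + crc).decode("ascii")
        return f"NVMeTLSkey-1:{hash_id:02d}:{body}:"

    # Should reproduce the two keys written to the /tmp/tmp.* files above,
    # provided the CRC handling really matches nvmf/common.sh:
    print(format_interchange_psk("00112233445566778899aabbccddeeff", 1))
    print(format_interchange_psk("ffeeddccbbaa99887766554433221100", 1))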
00:25:35.137 ======================================================== 00:25:35.137 Latency(us) 00:25:35.137 Device Information : IOPS MiB/s Average min max 00:25:35.137 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19100.65 74.61 3350.67 1026.05 4009.16 00:25:35.137 ======================================================== 00:25:35.137 Total : 19100.65 74.61 3350.67 1026.05 4009.16 00:25:35.137 00:25:35.137 10:41:40 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TReI68Mgjt 00:25:35.137 10:41:40 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:35.137 10:41:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:35.137 10:41:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:35.137 10:41:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.TReI68Mgjt' 00:25:35.137 10:41:40 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:35.137 10:41:40 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2038257 00:25:35.137 10:41:40 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:35.138 10:41:40 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2038257 /var/tmp/bdevperf.sock 00:25:35.138 10:41:40 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:35.138 10:41:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2038257 ']' 00:25:35.138 10:41:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:35.138 10:41:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:35.138 10:41:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:35.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:35.138 10:41:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:35.138 10:41:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:35.398 [2024-07-22 10:41:40.843943] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
00:25:35.398 [2024-07-22 10:41:40.844010] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2038257 ] 00:25:35.398 EAL: No free 2048 kB hugepages reported on node 1 00:25:35.398 [2024-07-22 10:41:40.899591] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.398 [2024-07-22 10:41:40.927787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:35.398 10:41:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:35.398 10:41:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:35.398 10:41:40 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.TReI68Mgjt 00:25:35.659 [2024-07-22 10:41:41.134038] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:35.659 [2024-07-22 10:41:41.134097] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:35.659 TLSTESTn1 00:25:35.659 10:41:41 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:25:35.659 Running I/O for 10 seconds... 00:25:47.883 00:25:47.883 Latency(us) 00:25:47.883 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:47.883 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:47.883 Verification LBA range: start 0x0 length 0x2000 00:25:47.883 TLSTESTn1 : 10.04 6002.77 23.45 0.00 0.00 21266.27 5543.25 53084.16 00:25:47.883 =================================================================================================================== 00:25:47.883 Total : 6002.77 23.45 0.00 0.00 21266.27 5543.25 53084.16 00:25:47.883 0 00:25:47.883 10:41:51 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:47.883 10:41:51 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2038257 00:25:47.883 10:41:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2038257 ']' 00:25:47.883 10:41:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2038257 00:25:47.883 10:41:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:47.883 10:41:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:47.883 10:41:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2038257 00:25:47.883 10:41:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:25:47.883 10:41:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:25:47.883 10:41:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2038257' 00:25:47.883 killing process with pid 2038257 00:25:47.883 10:41:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2038257 00:25:47.883 Received shutdown signal, test time was about 10.000000 seconds 00:25:47.883 00:25:47.883 Latency(us) 00:25:47.883 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:25:47.883 =================================================================================================================== 00:25:47.883 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:47.883 [2024-07-22 10:41:51.465992] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:47.883 10:41:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2038257 00:25:47.883 10:41:51 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kTA3Pa6KjJ 00:25:47.883 10:41:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:25:47.883 10:41:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kTA3Pa6KjJ 00:25:47.883 10:41:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:25:47.883 10:41:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:47.883 10:41:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:25:47.883 10:41:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:47.883 10:41:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kTA3Pa6KjJ 00:25:47.883 10:41:51 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:47.883 10:41:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:47.883 10:41:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:47.883 10:41:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.kTA3Pa6KjJ' 00:25:47.883 10:41:51 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:47.883 10:41:51 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2040359 00:25:47.883 10:41:51 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:47.883 10:41:51 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2040359 /var/tmp/bdevperf.sock 00:25:47.883 10:41:51 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:47.883 10:41:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2040359 ']' 00:25:47.883 10:41:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:47.883 10:41:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:47.883 10:41:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:47.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:47.883 10:41:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:47.883 10:41:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:47.883 [2024-07-22 10:41:51.629376] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
00:25:47.883 [2024-07-22 10:41:51.629453] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2040359 ] 00:25:47.883 EAL: No free 2048 kB hugepages reported on node 1 00:25:47.883 [2024-07-22 10:41:51.683065] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:47.883 [2024-07-22 10:41:51.710907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:47.883 10:41:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:47.883 10:41:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:47.883 10:41:51 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kTA3Pa6KjJ 00:25:47.883 [2024-07-22 10:41:51.917272] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:47.883 [2024-07-22 10:41:51.917331] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:47.883 [2024-07-22 10:41:51.923829] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:47.883 [2024-07-22 10:41:51.924179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1914cf0 (107): Transport endpoint is not connected 00:25:47.883 [2024-07-22 10:41:51.925174] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1914cf0 (9): Bad file descriptor 00:25:47.883 [2024-07-22 10:41:51.926176] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.883 [2024-07-22 10:41:51.926189] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:47.883 [2024-07-22 10:41:51.926197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
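This first failure is the intended outcome: cnode1 only has the PSK registered for host1 via nvmf_subsystem_add_host, so attaching with the other key file (/tmp/tmp.kTA3Pa6KjJ) cannot complete the TLS handshake, the controller ends up in the failed state logged just above, and the JSON-RPC request/response that follows reports the error to the caller. The surrounding NOT ... run_bdevperf wrapper passes only when the attach fails; below is a hedged Python sketch of that expectation, reusing the exact rpc.py invocation from the log (the rpc.py path, the /tmp/tmp.* key name and the bdevperf socket are all specific to this run and assumed to already exist).

    import subprocess

    RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"  # path from this run

    def expect_attach_failure(psk_path: str, subnqn: str, hostnqn: str,
                              sock: str = "/var/tmp/bdevperf.sock") -> None:
        # Mirrors the NOT + run_bdevperf pattern: the test case passes only if the attach fails.
        cmd = [RPC, "-s", sock, "bdev_nvme_attach_controller",
               "-b", "TLSTEST", "-t", "tcp", "-a", "10.0.0.2", "-s", "4420",
               "-f", "ipv4", "-n", subnqn, "-q", hostnqn, "--psk", psk_path]
        rc = subprocess.run(cmd, capture_output=True, text=True).returncode
        assert rc != 0, "attach unexpectedly succeeded with a mismatched PSK"

    # First negative case above: correct host/subsystem pair, wrong key file.
    # expect_attach_failure("/tmp/tmp.kTA3Pa6KjJ",
    #                       "nqn.2016-06.io.spdk:cnode1",
    #                       "nqn.2016-06.io.spdk:host1")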
00:25:47.883 request: 00:25:47.883 { 00:25:47.883 "name": "TLSTEST", 00:25:47.883 "trtype": "tcp", 00:25:47.883 "traddr": "10.0.0.2", 00:25:47.883 "adrfam": "ipv4", 00:25:47.883 "trsvcid": "4420", 00:25:47.883 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:47.884 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:47.884 "prchk_reftag": false, 00:25:47.884 "prchk_guard": false, 00:25:47.884 "hdgst": false, 00:25:47.884 "ddgst": false, 00:25:47.884 "psk": "/tmp/tmp.kTA3Pa6KjJ", 00:25:47.884 "method": "bdev_nvme_attach_controller", 00:25:47.884 "req_id": 1 00:25:47.884 } 00:25:47.884 Got JSON-RPC error response 00:25:47.884 response: 00:25:47.884 { 00:25:47.884 "code": -5, 00:25:47.884 "message": "Input/output error" 00:25:47.884 } 00:25:47.884 10:41:51 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2040359 00:25:47.884 10:41:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2040359 ']' 00:25:47.884 10:41:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2040359 00:25:47.884 10:41:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:47.884 10:41:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:47.884 10:41:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2040359 00:25:47.884 10:41:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:25:47.884 10:41:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:25:47.884 10:41:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2040359' 00:25:47.884 killing process with pid 2040359 00:25:47.884 10:41:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2040359 00:25:47.884 Received shutdown signal, test time was about 10.000000 seconds 00:25:47.884 00:25:47.884 Latency(us) 00:25:47.884 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:47.884 =================================================================================================================== 00:25:47.884 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:47.884 [2024-07-22 10:41:51.995666] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:47.884 10:41:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2040359 00:25:47.884 10:41:52 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:25:47.884 10:41:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:25:47.884 10:41:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:47.884 10:41:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:47.884 10:41:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:47.884 10:41:52 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.TReI68Mgjt 00:25:47.884 10:41:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:25:47.884 10:41:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.TReI68Mgjt 00:25:47.884 10:41:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:25:47.884 10:41:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:47.884 10:41:52 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:25:47.884 10:41:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:47.884 10:41:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.TReI68Mgjt 00:25:47.884 10:41:52 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:47.884 10:41:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:47.884 10:41:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:25:47.884 10:41:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.TReI68Mgjt' 00:25:47.884 10:41:52 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:47.884 10:41:52 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2040579 00:25:47.884 10:41:52 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:47.884 10:41:52 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2040579 /var/tmp/bdevperf.sock 00:25:47.884 10:41:52 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:47.884 10:41:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2040579 ']' 00:25:47.884 10:41:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:47.884 10:41:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:47.884 10:41:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:47.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:47.884 10:41:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:47.884 10:41:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:47.884 [2024-07-22 10:41:52.143361] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
00:25:47.884 [2024-07-22 10:41:52.143422] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2040579 ] 00:25:47.884 EAL: No free 2048 kB hugepages reported on node 1 00:25:47.884 [2024-07-22 10:41:52.197387] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:47.884 [2024-07-22 10:41:52.225153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:47.884 10:41:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:47.884 10:41:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:47.884 10:41:52 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.TReI68Mgjt 00:25:47.884 [2024-07-22 10:41:52.427206] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:47.884 [2024-07-22 10:41:52.427265] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:47.884 [2024-07-22 10:41:52.434968] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:25:47.884 [2024-07-22 10:41:52.434987] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:25:47.884 [2024-07-22 10:41:52.435008] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:47.884 [2024-07-22 10:41:52.435272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb18cf0 (107): Transport endpoint is not connected 00:25:47.884 [2024-07-22 10:41:52.436267] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb18cf0 (9): Bad file descriptor 00:25:47.884 [2024-07-22 10:41:52.437270] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.884 [2024-07-22 10:41:52.437281] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:47.884 [2024-07-22 10:41:52.437288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
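The target-side errors just above show why this second case fails: the TLS PSK is looked up by an identity string that binds the host NQN and the subsystem NQN together, and only host1 was registered against cnode1, so the key presented for host2 has no match on file. A small sketch of that identity string as it appears in the log follows; the meanings of the fixed '0' and 'R' fields (identity-format version and retained-PSK indicator in the NVMe/TCP TLS scheme) are an assumption here, copied verbatim from the logged identity.

    def tls_psk_identity(hostnqn: str, subnqn: str, hash_id: int = 1) -> str:
        # Shape taken from the log line:
        #   "Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1"
        # The two-digit hash field matches the "01" used in the interchange keys.
        return f"NVMe0R{hash_id:02d} {hostnqn} {subnqn}"

    # Only (host1, cnode1) was registered with nvmf_subsystem_add_host --psk, so this
    # identity has no PSK on file and the handshake is rejected:
    print(tls_psk_identity("nqn.2016-06.io.spdk:host2", "nqn.2016-06.io.spdk:cnode1"))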
00:25:47.884 request: 00:25:47.884 { 00:25:47.884 "name": "TLSTEST", 00:25:47.884 "trtype": "tcp", 00:25:47.884 "traddr": "10.0.0.2", 00:25:47.884 "adrfam": "ipv4", 00:25:47.884 "trsvcid": "4420", 00:25:47.884 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:47.884 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:47.884 "prchk_reftag": false, 00:25:47.884 "prchk_guard": false, 00:25:47.884 "hdgst": false, 00:25:47.884 "ddgst": false, 00:25:47.884 "psk": "/tmp/tmp.TReI68Mgjt", 00:25:47.884 "method": "bdev_nvme_attach_controller", 00:25:47.884 "req_id": 1 00:25:47.884 } 00:25:47.884 Got JSON-RPC error response 00:25:47.884 response: 00:25:47.884 { 00:25:47.884 "code": -5, 00:25:47.884 "message": "Input/output error" 00:25:47.884 } 00:25:47.884 10:41:52 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2040579 00:25:47.884 10:41:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2040579 ']' 00:25:47.884 10:41:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2040579 00:25:47.884 10:41:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:47.884 10:41:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:47.884 10:41:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2040579 00:25:47.884 10:41:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:25:47.884 10:41:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:25:47.884 10:41:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2040579' 00:25:47.884 killing process with pid 2040579 00:25:47.885 10:41:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2040579 00:25:47.885 Received shutdown signal, test time was about 10.000000 seconds 00:25:47.885 00:25:47.885 Latency(us) 00:25:47.885 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:47.885 =================================================================================================================== 00:25:47.885 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:47.885 [2024-07-22 10:41:52.506078] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:47.885 10:41:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2040579 00:25:47.885 10:41:52 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:25:47.885 10:41:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:25:47.885 10:41:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:47.885 10:41:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:47.885 10:41:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:47.885 10:41:52 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.TReI68Mgjt 00:25:47.885 10:41:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:25:47.885 10:41:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.TReI68Mgjt 00:25:47.885 10:41:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:25:47.885 10:41:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:47.885 10:41:52 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:25:47.885 10:41:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:47.885 10:41:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.TReI68Mgjt 00:25:47.885 10:41:52 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:47.885 10:41:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:25:47.885 10:41:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:47.885 10:41:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.TReI68Mgjt' 00:25:47.885 10:41:52 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:47.885 10:41:52 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2040594 00:25:47.885 10:41:52 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:47.885 10:41:52 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2040594 /var/tmp/bdevperf.sock 00:25:47.885 10:41:52 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:47.885 10:41:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2040594 ']' 00:25:47.885 10:41:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:47.885 10:41:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:47.885 10:41:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:47.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:47.885 10:41:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:47.885 10:41:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:47.885 [2024-07-22 10:41:52.654947] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
00:25:47.885 [2024-07-22 10:41:52.655001] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2040594 ] 00:25:47.885 EAL: No free 2048 kB hugepages reported on node 1 00:25:47.885 [2024-07-22 10:41:52.709867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:47.885 [2024-07-22 10:41:52.735784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:47.885 10:41:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:47.885 10:41:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:47.885 10:41:52 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.TReI68Mgjt 00:25:47.885 [2024-07-22 10:41:52.950216] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:47.885 [2024-07-22 10:41:52.950281] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:47.885 [2024-07-22 10:41:52.956242] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:25:47.885 [2024-07-22 10:41:52.956261] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:25:47.885 [2024-07-22 10:41:52.956280] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:47.885 [2024-07-22 10:41:52.957253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15bdcf0 (107): Transport endpoint is not connected 00:25:47.885 [2024-07-22 10:41:52.958249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15bdcf0 (9): Bad file descriptor 00:25:47.885 [2024-07-22 10:41:52.959251] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:25:47.885 [2024-07-22 10:41:52.959262] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:47.885 [2024-07-22 10:41:52.959269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
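Whatever the target-side reason (a key mismatch in the first case, an unknown PSK identity for host2, or this third case where host1's key is presented for cnode2, a subsystem that was never configured in this run), the initiator reports it the same way: bdev_nvme_attach_controller comes back with a JSON-RPC error of code -5 and message "Input/output error", as in the response dumped below. A small sketch of reading that code as a negative errno; the JSON literal is copied from this log.

    import errno, json, os

    response = json.loads('{"code": -5, "message": "Input/output error"}')

    num = abs(response["code"])                 # negative errno, judging by the message text
    name = errno.errorcode.get(num, "UNKNOWN")  # 5 -> "EIO" on Linux
    print(f'{response["code"]} => {name}: {os.strerror(num)}')  # -5 => EIO: Input/output error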
00:25:47.885 request: 00:25:47.885 { 00:25:47.885 "name": "TLSTEST", 00:25:47.885 "trtype": "tcp", 00:25:47.885 "traddr": "10.0.0.2", 00:25:47.885 "adrfam": "ipv4", 00:25:47.885 "trsvcid": "4420", 00:25:47.885 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:47.885 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:47.885 "prchk_reftag": false, 00:25:47.885 "prchk_guard": false, 00:25:47.885 "hdgst": false, 00:25:47.885 "ddgst": false, 00:25:47.885 "psk": "/tmp/tmp.TReI68Mgjt", 00:25:47.885 "method": "bdev_nvme_attach_controller", 00:25:47.885 "req_id": 1 00:25:47.885 } 00:25:47.885 Got JSON-RPC error response 00:25:47.885 response: 00:25:47.885 { 00:25:47.885 "code": -5, 00:25:47.885 "message": "Input/output error" 00:25:47.885 } 00:25:47.885 10:41:52 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2040594 00:25:47.885 10:41:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2040594 ']' 00:25:47.885 10:41:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2040594 00:25:47.885 10:41:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:47.885 10:41:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:47.885 10:41:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2040594 00:25:47.885 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:25:47.885 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:25:47.885 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2040594' 00:25:47.885 killing process with pid 2040594 00:25:47.885 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2040594 00:25:47.885 Received shutdown signal, test time was about 10.000000 seconds 00:25:47.885 00:25:47.885 Latency(us) 00:25:47.885 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:47.885 =================================================================================================================== 00:25:47.885 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:47.885 [2024-07-22 10:41:53.045919] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:47.885 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2040594 00:25:47.885 10:41:53 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:25:47.885 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:25:47.885 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:47.885 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:47.885 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:47.885 10:41:53 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:25:47.885 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:25:47.885 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:25:47.885 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:25:47.885 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:47.885 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:25:47.885 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:47.885 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:25:47.885 10:41:53 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:47.885 10:41:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:47.885 10:41:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:47.885 10:41:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:25:47.885 10:41:53 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:47.885 10:41:53 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2040638 00:25:47.885 10:41:53 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:47.885 10:41:53 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2040638 /var/tmp/bdevperf.sock 00:25:47.885 10:41:53 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:47.885 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2040638 ']' 00:25:47.885 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:47.885 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:47.885 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:47.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:47.885 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:47.885 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:47.885 [2024-07-22 10:41:53.200378] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
00:25:47.885 [2024-07-22 10:41:53.200433] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2040638 ] 00:25:47.885 EAL: No free 2048 kB hugepages reported on node 1 00:25:47.886 [2024-07-22 10:41:53.255421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:47.886 [2024-07-22 10:41:53.282547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:47.886 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:47.886 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:47.886 10:41:53 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:47.886 [2024-07-22 10:41:53.507604] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:47.886 [2024-07-22 10:41:53.509548] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x53fed0 (9): Bad file descriptor 00:25:47.886 [2024-07-22 10:41:53.510547] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.886 [2024-07-22 10:41:53.510555] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:47.886 [2024-07-22 10:41:53.510563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:47.886 request: 00:25:47.886 { 00:25:47.886 "name": "TLSTEST", 00:25:47.886 "trtype": "tcp", 00:25:47.886 "traddr": "10.0.0.2", 00:25:47.886 "adrfam": "ipv4", 00:25:47.886 "trsvcid": "4420", 00:25:47.886 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:47.886 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:47.886 "prchk_reftag": false, 00:25:47.886 "prchk_guard": false, 00:25:47.886 "hdgst": false, 00:25:47.886 "ddgst": false, 00:25:47.886 "method": "bdev_nvme_attach_controller", 00:25:47.886 "req_id": 1 00:25:47.886 } 00:25:47.886 Got JSON-RPC error response 00:25:47.886 response: 00:25:47.886 { 00:25:47.886 "code": -5, 00:25:47.886 "message": "Input/output error" 00:25:47.886 } 00:25:47.886 10:41:53 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2040638 00:25:47.886 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2040638 ']' 00:25:47.886 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2040638 00:25:47.886 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:47.886 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:47.886 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2040638 00:25:48.146 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:25:48.146 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:25:48.146 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2040638' 00:25:48.146 killing process with pid 2040638 00:25:48.146 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2040638 00:25:48.146 Received shutdown signal, test time was about 10.000000 seconds 00:25:48.146 00:25:48.146 Latency(us) 00:25:48.146 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:48.146 =================================================================================================================== 00:25:48.146 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:48.146 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2040638 00:25:48.146 10:41:53 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:25:48.146 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:25:48.146 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:48.146 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:48.146 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:48.146 10:41:53 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 2035508 00:25:48.146 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2035508 ']' 00:25:48.146 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2035508 00:25:48.146 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:48.146 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:48.146 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2035508 00:25:48.146 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:48.146 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:48.146 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2035508' 00:25:48.146 
killing process with pid 2035508 00:25:48.146 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2035508 00:25:48.146 [2024-07-22 10:41:53.747124] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:48.146 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2035508 00:25:48.407 10:41:53 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:25:48.407 10:41:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:25:48.407 10:41:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:25:48.407 10:41:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:25:48.407 10:41:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:25:48.407 10:41:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:25:48.407 10:41:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:25:48.407 10:41:53 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:25:48.407 10:41:53 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:25:48.407 10:41:53 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.n08PQwUmHc 00:25:48.407 10:41:53 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:25:48.407 10:41:53 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.n08PQwUmHc 00:25:48.407 10:41:53 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:25:48.407 10:41:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:48.407 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:48.407 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:48.407 10:41:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2040955 00:25:48.407 10:41:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2040955 00:25:48.407 10:41:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:48.407 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2040955 ']' 00:25:48.407 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:48.407 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:48.407 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:48.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:48.407 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:48.407 10:41:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:48.407 [2024-07-22 10:41:53.967650] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
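The key_long value above is the NVMe TLS PSK interchange form of the 48-character string passed to format_interchange_psk: a fixed prefix, a two-digit hash identifier, then base64 of the configured PSK bytes with a CRC-32 appended, and a trailing colon. A small sketch that reproduces that layout; the little-endian CRC byte order is an assumption, so compare the output against the NVMeTLSkey-1:02:... string in the trace before relying on it:

    import base64
    import zlib

    def format_interchange_psk(key: str, digest: int) -> str:
        # Layout observed above: prefix, 2-digit hash id,
        # base64(configured PSK bytes || CRC-32), trailing ':'.
        # CRC byte order is assumed little-endian here.
        crc = zlib.crc32(key.encode()).to_bytes(4, "little")
        b64 = base64.b64encode(key.encode() + crc).decode()
        return f"NVMeTLSkey-1:{digest:02}:{b64}:"

    print(format_interchange_psk(
        "00112233445566778899aabbccddeeff0011223344556677", 2))
    # The test writes this string to a mktemp file and chmods it to 0600
    # before handing the path to the target and to bdevperf.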
00:25:48.407 [2024-07-22 10:41:53.967704] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:48.407 EAL: No free 2048 kB hugepages reported on node 1 00:25:48.407 [2024-07-22 10:41:54.055074] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:48.407 [2024-07-22 10:41:54.084146] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:48.407 [2024-07-22 10:41:54.084178] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:48.407 [2024-07-22 10:41:54.084184] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:48.407 [2024-07-22 10:41:54.084188] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:48.407 [2024-07-22 10:41:54.084193] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:48.407 [2024-07-22 10:41:54.084207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:49.358 10:41:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:49.358 10:41:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:49.358 10:41:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:49.358 10:41:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:49.358 10:41:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:49.358 10:41:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:49.358 10:41:54 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.n08PQwUmHc 00:25:49.358 10:41:54 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.n08PQwUmHc 00:25:49.358 10:41:54 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:49.358 [2024-07-22 10:41:54.913571] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:49.358 10:41:54 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:49.654 10:41:55 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:49.654 [2024-07-22 10:41:55.222317] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:49.654 [2024-07-22 10:41:55.222516] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:49.655 10:41:55 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:49.915 malloc0 00:25:49.915 10:41:55 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:49.916 10:41:55 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.n08PQwUmHc 00:25:50.176 [2024-07-22 10:41:55.685349] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:50.176 10:41:55 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.n08PQwUmHc 00:25:50.176 10:41:55 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:50.176 10:41:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:50.176 10:41:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:50.176 10:41:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.n08PQwUmHc' 00:25:50.176 10:41:55 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:50.176 10:41:55 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:50.176 10:41:55 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2041311 00:25:50.176 10:41:55 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:50.176 10:41:55 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2041311 /var/tmp/bdevperf.sock 00:25:50.176 10:41:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2041311 ']' 00:25:50.176 10:41:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:50.176 10:41:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:50.176 10:41:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:50.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:50.177 10:41:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:50.177 10:41:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:50.177 [2024-07-22 10:41:55.730327] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
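The setup_nvmf_tgt helper traced above is a fixed sequence of rpc.py calls against the target: create the TCP transport, create the subsystem, add a TLS-enabled listener (-k), back it with a malloc bdev, and register the host together with its PSK path. A condensed sketch of the same sequence driven from Python; the rpc.py path, key path and NQNs are taken from this trace and would need to match the environment, and this is only a restatement of the helper, not its actual implementation:

    import subprocess

    RPC_PY = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
    PSK = "/tmp/tmp.n08PQwUmHc"
    SUBNQN = "nqn.2016-06.io.spdk:cnode1"
    HOSTNQN = "nqn.2016-06.io.spdk:host1"

    def rpc(*args):
        subprocess.run([RPC_PY, *args], check=True)

    # Same order as setup_nvmf_tgt in the trace above.
    rpc("nvmf_create_transport", "-t", "tcp", "-o")
    rpc("nvmf_create_subsystem", SUBNQN, "-s", "SPDK00000000000001", "-m", "10")
    rpc("nvmf_subsystem_add_listener", SUBNQN,
        "-t", "tcp", "-a", "10.0.0.2", "-s", "4420", "-k")   # -k: TLS listener
    rpc("bdev_malloc_create", "32", "4096", "-b", "malloc0")
    rpc("nvmf_subsystem_add_ns", SUBNQN, "malloc0", "-n", "1")
    rpc("nvmf_subsystem_add_host", SUBNQN, HOSTNQN, "--psk", PSK)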
00:25:50.177 [2024-07-22 10:41:55.730375] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2041311 ] 00:25:50.177 EAL: No free 2048 kB hugepages reported on node 1 00:25:50.177 [2024-07-22 10:41:55.788115] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.177 [2024-07-22 10:41:55.816359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:50.437 10:41:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:50.437 10:41:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:50.437 10:41:55 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.n08PQwUmHc 00:25:50.437 [2024-07-22 10:41:56.030863] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:50.437 [2024-07-22 10:41:56.030927] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:50.437 TLSTESTn1 00:25:50.437 10:41:56 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:25:50.705 Running I/O for 10 seconds... 00:26:00.716 00:26:00.716 Latency(us) 00:26:00.716 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:00.716 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:00.716 Verification LBA range: start 0x0 length 0x2000 00:26:00.716 TLSTESTn1 : 10.03 4365.06 17.05 0.00 0.00 29266.95 4997.12 108352.85 00:26:00.716 =================================================================================================================== 00:26:00.716 Total : 4365.06 17.05 0.00 0.00 29266.95 4997.12 108352.85 00:26:00.716 0 00:26:00.716 10:42:06 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:00.716 10:42:06 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2041311 00:26:00.716 10:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2041311 ']' 00:26:00.716 10:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2041311 00:26:00.716 10:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:26:00.716 10:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:00.716 10:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2041311 00:26:00.716 10:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:26:00.716 10:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:26:00.716 10:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2041311' 00:26:00.716 killing process with pid 2041311 00:26:00.716 10:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2041311 00:26:00.716 Received shutdown signal, test time was about 10.000000 seconds 00:26:00.716 00:26:00.716 Latency(us) 00:26:00.716 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:26:00.716 =================================================================================================================== 00:26:00.717 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:00.717 [2024-07-22 10:42:06.344751] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:26:00.717 10:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2041311 00:26:00.977 10:42:06 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.n08PQwUmHc 00:26:00.977 10:42:06 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.n08PQwUmHc 00:26:00.977 10:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:26:00.977 10:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.n08PQwUmHc 00:26:00.977 10:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:26:00.977 10:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:00.977 10:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:26:00.977 10:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:00.977 10:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.n08PQwUmHc 00:26:00.977 10:42:06 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:26:00.977 10:42:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:26:00.977 10:42:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:26:00.977 10:42:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.n08PQwUmHc' 00:26:00.977 10:42:06 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:00.977 10:42:06 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2043433 00:26:00.977 10:42:06 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:00.977 10:42:06 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2043433 /var/tmp/bdevperf.sock 00:26:00.977 10:42:06 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:26:00.977 10:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2043433 ']' 00:26:00.977 10:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:00.977 10:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:00.977 10:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:00.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:00.977 10:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:00.977 10:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:00.977 [2024-07-22 10:42:06.505333] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
00:26:00.977 [2024-07-22 10:42:06.505391] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2043433 ] 00:26:00.977 EAL: No free 2048 kB hugepages reported on node 1 00:26:00.977 [2024-07-22 10:42:06.560355] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:00.977 [2024-07-22 10:42:06.586366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:00.977 10:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:00.977 10:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:26:00.977 10:42:06 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.n08PQwUmHc 00:26:01.238 [2024-07-22 10:42:06.800795] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:01.238 [2024-07-22 10:42:06.800840] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:26:01.238 [2024-07-22 10:42:06.800846] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.n08PQwUmHc 00:26:01.238 request: 00:26:01.238 { 00:26:01.238 "name": "TLSTEST", 00:26:01.238 "trtype": "tcp", 00:26:01.238 "traddr": "10.0.0.2", 00:26:01.238 "adrfam": "ipv4", 00:26:01.238 "trsvcid": "4420", 00:26:01.238 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:01.238 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:01.238 "prchk_reftag": false, 00:26:01.238 "prchk_guard": false, 00:26:01.238 "hdgst": false, 00:26:01.238 "ddgst": false, 00:26:01.238 "psk": "/tmp/tmp.n08PQwUmHc", 00:26:01.238 "method": "bdev_nvme_attach_controller", 00:26:01.238 "req_id": 1 00:26:01.238 } 00:26:01.238 Got JSON-RPC error response 00:26:01.238 response: 00:26:01.238 { 00:26:01.238 "code": -1, 00:26:01.238 "message": "Operation not permitted" 00:26:01.238 } 00:26:01.238 10:42:06 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2043433 00:26:01.238 10:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2043433 ']' 00:26:01.238 10:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2043433 00:26:01.239 10:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:26:01.239 10:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:01.239 10:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2043433 00:26:01.239 10:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:26:01.239 10:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:26:01.239 10:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2043433' 00:26:01.239 killing process with pid 2043433 00:26:01.239 10:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2043433 00:26:01.239 Received shutdown signal, test time was about 10.000000 seconds 00:26:01.239 00:26:01.239 Latency(us) 00:26:01.239 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:01.239 
=================================================================================================================== 00:26:01.239 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:01.239 10:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2043433 00:26:01.501 10:42:06 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:26:01.501 10:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:26:01.501 10:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:01.501 10:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:01.501 10:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:01.501 10:42:06 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 2040955 00:26:01.501 10:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2040955 ']' 00:26:01.501 10:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2040955 00:26:01.501 10:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:26:01.501 10:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:01.501 10:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2040955 00:26:01.501 10:42:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:01.501 10:42:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:01.501 10:42:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2040955' 00:26:01.501 killing process with pid 2040955 00:26:01.501 10:42:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2040955 00:26:01.501 [2024-07-22 10:42:07.040235] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:26:01.501 10:42:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2040955 00:26:01.501 10:42:07 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:26:01.501 10:42:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:01.501 10:42:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:01.501 10:42:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:01.501 10:42:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2043507 00:26:01.501 10:42:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2043507 00:26:01.501 10:42:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:01.501 10:42:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2043507 ']' 00:26:01.501 10:42:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:01.501 10:42:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:01.501 10:42:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:01.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
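The -1 / "Operation not permitted" failure above is purely about file modes: the same key file that worked at 0600 is rejected by bdev_nvme_load_psk once the test chmods it to 0666. The trace does not show the exact check, but the behaviour is consistent with refusing any PSK file that is readable by group or other, along these lines:

    import os
    import stat

    def psk_file_permissions_ok(path: str) -> bool:
        # Assumption based on the 0600-passes / 0666-fails behaviour above:
        # reject a PSK file that grants any group/other access bits.
        mode = stat.S_IMODE(os.stat(path).st_mode)
        return (mode & 0o077) == 0

    print(psk_file_permissions_ok("/tmp/tmp.n08PQwUmHc"))  # False after chmod 0666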
00:26:01.501 10:42:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:01.501 10:42:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:01.761 [2024-07-22 10:42:07.210477] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:26:01.761 [2024-07-22 10:42:07.210533] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:01.761 EAL: No free 2048 kB hugepages reported on node 1 00:26:01.761 [2024-07-22 10:42:07.298654] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:01.761 [2024-07-22 10:42:07.329111] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:01.761 [2024-07-22 10:42:07.329150] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:01.761 [2024-07-22 10:42:07.329155] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:01.761 [2024-07-22 10:42:07.329160] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:01.761 [2024-07-22 10:42:07.329164] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:01.761 [2024-07-22 10:42:07.329182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:02.332 10:42:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:02.332 10:42:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:26:02.332 10:42:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:02.332 10:42:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:02.332 10:42:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:02.332 10:42:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:02.332 10:42:08 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.n08PQwUmHc 00:26:02.332 10:42:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:26:02.332 10:42:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.n08PQwUmHc 00:26:02.332 10:42:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:26:02.332 10:42:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:02.332 10:42:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:26:02.332 10:42:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:02.332 10:42:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.n08PQwUmHc 00:26:02.332 10:42:08 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.n08PQwUmHc 00:26:02.332 10:42:08 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:02.592 [2024-07-22 10:42:08.196097] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:02.592 10:42:08 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:26:02.852 
10:42:08 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:26:02.852 [2024-07-22 10:42:08.504852] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:02.852 [2024-07-22 10:42:08.505046] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:02.852 10:42:08 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:26:03.112 malloc0 00:26:03.112 10:42:08 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:26:03.371 10:42:08 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.n08PQwUmHc 00:26:03.371 [2024-07-22 10:42:08.939888] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:26:03.371 [2024-07-22 10:42:08.939912] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:26:03.371 [2024-07-22 10:42:08.939932] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:26:03.371 request: 00:26:03.371 { 00:26:03.371 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:03.371 "host": "nqn.2016-06.io.spdk:host1", 00:26:03.371 "psk": "/tmp/tmp.n08PQwUmHc", 00:26:03.371 "method": "nvmf_subsystem_add_host", 00:26:03.371 "req_id": 1 00:26:03.371 } 00:26:03.371 Got JSON-RPC error response 00:26:03.371 response: 00:26:03.371 { 00:26:03.371 "code": -32603, 00:26:03.371 "message": "Internal error" 00:26:03.371 } 00:26:03.371 10:42:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:26:03.371 10:42:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:03.371 10:42:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:03.371 10:42:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:03.371 10:42:08 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 2043507 00:26:03.371 10:42:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2043507 ']' 00:26:03.371 10:42:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2043507 00:26:03.371 10:42:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:26:03.371 10:42:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:03.371 10:42:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2043507 00:26:03.371 10:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:03.371 10:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:03.371 10:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2043507' 00:26:03.371 killing process with pid 2043507 00:26:03.371 10:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2043507 00:26:03.371 10:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2043507 00:26:03.636 10:42:09 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.n08PQwUmHc 00:26:03.636 10:42:09 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:26:03.636 
10:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:03.636 10:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:03.636 10:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:03.636 10:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2044223 00:26:03.636 10:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2044223 00:26:03.636 10:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:03.636 10:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2044223 ']' 00:26:03.636 10:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:03.636 10:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:03.636 10:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:03.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:03.636 10:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:03.636 10:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:03.636 [2024-07-22 10:42:09.185852] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:26:03.636 [2024-07-22 10:42:09.185908] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:03.636 EAL: No free 2048 kB hugepages reported on node 1 00:26:03.636 [2024-07-22 10:42:09.274622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:03.636 [2024-07-22 10:42:09.305558] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:03.636 [2024-07-22 10:42:09.305595] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:03.636 [2024-07-22 10:42:09.305601] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:03.636 [2024-07-22 10:42:09.305607] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:03.636 [2024-07-22 10:42:09.305611] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:03.636 [2024-07-22 10:42:09.305629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:04.576 10:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:04.576 10:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:26:04.576 10:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:04.576 10:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:04.576 10:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:04.576 10:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:04.576 10:42:09 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.n08PQwUmHc 00:26:04.576 10:42:09 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.n08PQwUmHc 00:26:04.576 10:42:09 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:04.576 [2024-07-22 10:42:10.124549] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:04.576 10:42:10 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:26:04.835 10:42:10 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:26:04.835 [2024-07-22 10:42:10.433291] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:04.835 [2024-07-22 10:42:10.433484] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:04.835 10:42:10 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:26:05.095 malloc0 00:26:05.095 10:42:10 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:26:05.095 10:42:10 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.n08PQwUmHc 00:26:05.354 [2024-07-22 10:42:10.868302] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:26:05.354 10:42:10 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=2044794 00:26:05.354 10:42:10 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:05.354 10:42:10 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 2044794 /var/tmp/bdevperf.sock 00:26:05.354 10:42:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2044794 ']' 00:26:05.354 10:42:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:05.354 10:42:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:05.354 10:42:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:05.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
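The repeated "Waiting for process to start up and listen on UNIX domain socket ..." lines come from the waitforlisten helper, which blocks until the freshly started application (here bdevperf launched with -z) has created its RPC socket and answers on it. A minimal way to express that wait; the connect-probe loop and the timeout value are assumptions about the helper's behaviour, not its actual code:

    import socket
    import time

    def waitforlisten(sock_path: str, timeout: float = 100.0) -> None:
        # Probe the Unix-domain RPC socket until the application accepts a
        # connection, or give up after `timeout` seconds.
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            try:
                with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                    s.connect(sock_path)
                    return
            except OSError:
                time.sleep(0.1)
        raise TimeoutError(f"{sock_path} never came up")

    waitforlisten("/var/tmp/bdevperf.sock")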
00:26:05.354 10:42:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:05.354 10:42:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:05.354 10:42:10 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:26:05.354 [2024-07-22 10:42:10.930526] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:26:05.354 [2024-07-22 10:42:10.930577] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2044794 ] 00:26:05.354 EAL: No free 2048 kB hugepages reported on node 1 00:26:05.354 [2024-07-22 10:42:10.984380] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:05.354 [2024-07-22 10:42:11.012509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:06.294 10:42:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:06.294 10:42:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:26:06.294 10:42:11 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.n08PQwUmHc 00:26:06.294 [2024-07-22 10:42:11.816348] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:06.294 [2024-07-22 10:42:11.816411] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:26:06.294 TLSTESTn1 00:26:06.294 10:42:11 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:26:06.555 10:42:12 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:26:06.555 "subsystems": [ 00:26:06.555 { 00:26:06.555 "subsystem": "keyring", 00:26:06.555 "config": [] 00:26:06.555 }, 00:26:06.555 { 00:26:06.555 "subsystem": "iobuf", 00:26:06.555 "config": [ 00:26:06.555 { 00:26:06.555 "method": "iobuf_set_options", 00:26:06.555 "params": { 00:26:06.555 "small_pool_count": 8192, 00:26:06.555 "large_pool_count": 1024, 00:26:06.555 "small_bufsize": 8192, 00:26:06.555 "large_bufsize": 135168 00:26:06.555 } 00:26:06.555 } 00:26:06.555 ] 00:26:06.555 }, 00:26:06.555 { 00:26:06.555 "subsystem": "sock", 00:26:06.555 "config": [ 00:26:06.555 { 00:26:06.555 "method": "sock_set_default_impl", 00:26:06.555 "params": { 00:26:06.555 "impl_name": "posix" 00:26:06.555 } 00:26:06.555 }, 00:26:06.555 { 00:26:06.555 "method": "sock_impl_set_options", 00:26:06.555 "params": { 00:26:06.555 "impl_name": "ssl", 00:26:06.555 "recv_buf_size": 4096, 00:26:06.555 "send_buf_size": 4096, 00:26:06.555 "enable_recv_pipe": true, 00:26:06.555 "enable_quickack": false, 00:26:06.555 "enable_placement_id": 0, 00:26:06.555 "enable_zerocopy_send_server": true, 00:26:06.555 "enable_zerocopy_send_client": false, 00:26:06.555 "zerocopy_threshold": 0, 00:26:06.555 "tls_version": 0, 00:26:06.555 "enable_ktls": false 00:26:06.555 } 00:26:06.555 }, 00:26:06.555 { 00:26:06.555 "method": "sock_impl_set_options", 00:26:06.555 "params": { 00:26:06.555 "impl_name": "posix", 00:26:06.555 
"recv_buf_size": 2097152, 00:26:06.555 "send_buf_size": 2097152, 00:26:06.555 "enable_recv_pipe": true, 00:26:06.555 "enable_quickack": false, 00:26:06.555 "enable_placement_id": 0, 00:26:06.555 "enable_zerocopy_send_server": true, 00:26:06.555 "enable_zerocopy_send_client": false, 00:26:06.555 "zerocopy_threshold": 0, 00:26:06.555 "tls_version": 0, 00:26:06.555 "enable_ktls": false 00:26:06.555 } 00:26:06.555 } 00:26:06.555 ] 00:26:06.555 }, 00:26:06.555 { 00:26:06.555 "subsystem": "vmd", 00:26:06.555 "config": [] 00:26:06.555 }, 00:26:06.555 { 00:26:06.555 "subsystem": "accel", 00:26:06.555 "config": [ 00:26:06.555 { 00:26:06.555 "method": "accel_set_options", 00:26:06.555 "params": { 00:26:06.555 "small_cache_size": 128, 00:26:06.555 "large_cache_size": 16, 00:26:06.555 "task_count": 2048, 00:26:06.555 "sequence_count": 2048, 00:26:06.555 "buf_count": 2048 00:26:06.555 } 00:26:06.555 } 00:26:06.555 ] 00:26:06.555 }, 00:26:06.555 { 00:26:06.555 "subsystem": "bdev", 00:26:06.555 "config": [ 00:26:06.555 { 00:26:06.555 "method": "bdev_set_options", 00:26:06.555 "params": { 00:26:06.555 "bdev_io_pool_size": 65535, 00:26:06.555 "bdev_io_cache_size": 256, 00:26:06.555 "bdev_auto_examine": true, 00:26:06.555 "iobuf_small_cache_size": 128, 00:26:06.556 "iobuf_large_cache_size": 16 00:26:06.556 } 00:26:06.556 }, 00:26:06.556 { 00:26:06.556 "method": "bdev_raid_set_options", 00:26:06.556 "params": { 00:26:06.556 "process_window_size_kb": 1024, 00:26:06.556 "process_max_bandwidth_mb_sec": 0 00:26:06.556 } 00:26:06.556 }, 00:26:06.556 { 00:26:06.556 "method": "bdev_iscsi_set_options", 00:26:06.556 "params": { 00:26:06.556 "timeout_sec": 30 00:26:06.556 } 00:26:06.556 }, 00:26:06.556 { 00:26:06.556 "method": "bdev_nvme_set_options", 00:26:06.556 "params": { 00:26:06.556 "action_on_timeout": "none", 00:26:06.556 "timeout_us": 0, 00:26:06.556 "timeout_admin_us": 0, 00:26:06.556 "keep_alive_timeout_ms": 10000, 00:26:06.556 "arbitration_burst": 0, 00:26:06.556 "low_priority_weight": 0, 00:26:06.556 "medium_priority_weight": 0, 00:26:06.556 "high_priority_weight": 0, 00:26:06.556 "nvme_adminq_poll_period_us": 10000, 00:26:06.556 "nvme_ioq_poll_period_us": 0, 00:26:06.556 "io_queue_requests": 0, 00:26:06.556 "delay_cmd_submit": true, 00:26:06.556 "transport_retry_count": 4, 00:26:06.556 "bdev_retry_count": 3, 00:26:06.556 "transport_ack_timeout": 0, 00:26:06.556 "ctrlr_loss_timeout_sec": 0, 00:26:06.556 "reconnect_delay_sec": 0, 00:26:06.556 "fast_io_fail_timeout_sec": 0, 00:26:06.556 "disable_auto_failback": false, 00:26:06.556 "generate_uuids": false, 00:26:06.556 "transport_tos": 0, 00:26:06.556 "nvme_error_stat": false, 00:26:06.556 "rdma_srq_size": 0, 00:26:06.556 "io_path_stat": false, 00:26:06.556 "allow_accel_sequence": false, 00:26:06.556 "rdma_max_cq_size": 0, 00:26:06.556 "rdma_cm_event_timeout_ms": 0, 00:26:06.556 "dhchap_digests": [ 00:26:06.556 "sha256", 00:26:06.556 "sha384", 00:26:06.556 "sha512" 00:26:06.556 ], 00:26:06.556 "dhchap_dhgroups": [ 00:26:06.556 "null", 00:26:06.556 "ffdhe2048", 00:26:06.556 "ffdhe3072", 00:26:06.556 "ffdhe4096", 00:26:06.556 "ffdhe6144", 00:26:06.556 "ffdhe8192" 00:26:06.556 ] 00:26:06.556 } 00:26:06.556 }, 00:26:06.556 { 00:26:06.556 "method": "bdev_nvme_set_hotplug", 00:26:06.556 "params": { 00:26:06.556 "period_us": 100000, 00:26:06.556 "enable": false 00:26:06.556 } 00:26:06.556 }, 00:26:06.556 { 00:26:06.556 "method": "bdev_malloc_create", 00:26:06.556 "params": { 00:26:06.556 "name": "malloc0", 00:26:06.556 "num_blocks": 8192, 00:26:06.556 "block_size": 
4096, 00:26:06.556 "physical_block_size": 4096, 00:26:06.556 "uuid": "463a8685-91fc-4bc0-9762-c90854160f3e", 00:26:06.556 "optimal_io_boundary": 0 00:26:06.556 } 00:26:06.556 }, 00:26:06.556 { 00:26:06.556 "method": "bdev_wait_for_examine" 00:26:06.556 } 00:26:06.556 ] 00:26:06.556 }, 00:26:06.556 { 00:26:06.556 "subsystem": "nbd", 00:26:06.556 "config": [] 00:26:06.556 }, 00:26:06.556 { 00:26:06.556 "subsystem": "scheduler", 00:26:06.556 "config": [ 00:26:06.556 { 00:26:06.556 "method": "framework_set_scheduler", 00:26:06.556 "params": { 00:26:06.556 "name": "static" 00:26:06.556 } 00:26:06.556 } 00:26:06.556 ] 00:26:06.556 }, 00:26:06.556 { 00:26:06.556 "subsystem": "nvmf", 00:26:06.556 "config": [ 00:26:06.556 { 00:26:06.556 "method": "nvmf_set_config", 00:26:06.556 "params": { 00:26:06.556 "discovery_filter": "match_any", 00:26:06.556 "admin_cmd_passthru": { 00:26:06.556 "identify_ctrlr": false 00:26:06.556 } 00:26:06.556 } 00:26:06.556 }, 00:26:06.556 { 00:26:06.556 "method": "nvmf_set_max_subsystems", 00:26:06.556 "params": { 00:26:06.556 "max_subsystems": 1024 00:26:06.556 } 00:26:06.556 }, 00:26:06.556 { 00:26:06.556 "method": "nvmf_set_crdt", 00:26:06.556 "params": { 00:26:06.556 "crdt1": 0, 00:26:06.556 "crdt2": 0, 00:26:06.556 "crdt3": 0 00:26:06.556 } 00:26:06.556 }, 00:26:06.556 { 00:26:06.556 "method": "nvmf_create_transport", 00:26:06.556 "params": { 00:26:06.556 "trtype": "TCP", 00:26:06.556 "max_queue_depth": 128, 00:26:06.556 "max_io_qpairs_per_ctrlr": 127, 00:26:06.556 "in_capsule_data_size": 4096, 00:26:06.556 "max_io_size": 131072, 00:26:06.556 "io_unit_size": 131072, 00:26:06.556 "max_aq_depth": 128, 00:26:06.556 "num_shared_buffers": 511, 00:26:06.556 "buf_cache_size": 4294967295, 00:26:06.556 "dif_insert_or_strip": false, 00:26:06.556 "zcopy": false, 00:26:06.556 "c2h_success": false, 00:26:06.556 "sock_priority": 0, 00:26:06.556 "abort_timeout_sec": 1, 00:26:06.556 "ack_timeout": 0, 00:26:06.556 "data_wr_pool_size": 0 00:26:06.556 } 00:26:06.556 }, 00:26:06.556 { 00:26:06.556 "method": "nvmf_create_subsystem", 00:26:06.556 "params": { 00:26:06.556 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:06.556 "allow_any_host": false, 00:26:06.556 "serial_number": "SPDK00000000000001", 00:26:06.556 "model_number": "SPDK bdev Controller", 00:26:06.556 "max_namespaces": 10, 00:26:06.556 "min_cntlid": 1, 00:26:06.556 "max_cntlid": 65519, 00:26:06.556 "ana_reporting": false 00:26:06.556 } 00:26:06.556 }, 00:26:06.556 { 00:26:06.556 "method": "nvmf_subsystem_add_host", 00:26:06.556 "params": { 00:26:06.556 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:06.556 "host": "nqn.2016-06.io.spdk:host1", 00:26:06.556 "psk": "/tmp/tmp.n08PQwUmHc" 00:26:06.556 } 00:26:06.556 }, 00:26:06.556 { 00:26:06.556 "method": "nvmf_subsystem_add_ns", 00:26:06.556 "params": { 00:26:06.556 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:06.556 "namespace": { 00:26:06.556 "nsid": 1, 00:26:06.556 "bdev_name": "malloc0", 00:26:06.556 "nguid": "463A868591FC4BC09762C90854160F3E", 00:26:06.556 "uuid": "463a8685-91fc-4bc0-9762-c90854160f3e", 00:26:06.556 "no_auto_visible": false 00:26:06.556 } 00:26:06.556 } 00:26:06.556 }, 00:26:06.556 { 00:26:06.556 "method": "nvmf_subsystem_add_listener", 00:26:06.556 "params": { 00:26:06.556 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:06.556 "listen_address": { 00:26:06.556 "trtype": "TCP", 00:26:06.556 "adrfam": "IPv4", 00:26:06.556 "traddr": "10.0.0.2", 00:26:06.556 "trsvcid": "4420" 00:26:06.556 }, 00:26:06.556 "secure_channel": true 00:26:06.556 } 00:26:06.556 } 00:26:06.556 ] 
00:26:06.556 } 00:26:06.556 ] 00:26:06.556 }' 00:26:06.556 10:42:12 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:26:06.818 10:42:12 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:26:06.818 "subsystems": [ 00:26:06.818 { 00:26:06.818 "subsystem": "keyring", 00:26:06.818 "config": [] 00:26:06.818 }, 00:26:06.818 { 00:26:06.818 "subsystem": "iobuf", 00:26:06.818 "config": [ 00:26:06.818 { 00:26:06.818 "method": "iobuf_set_options", 00:26:06.818 "params": { 00:26:06.818 "small_pool_count": 8192, 00:26:06.818 "large_pool_count": 1024, 00:26:06.818 "small_bufsize": 8192, 00:26:06.818 "large_bufsize": 135168 00:26:06.818 } 00:26:06.818 } 00:26:06.818 ] 00:26:06.818 }, 00:26:06.818 { 00:26:06.818 "subsystem": "sock", 00:26:06.818 "config": [ 00:26:06.818 { 00:26:06.818 "method": "sock_set_default_impl", 00:26:06.818 "params": { 00:26:06.818 "impl_name": "posix" 00:26:06.818 } 00:26:06.818 }, 00:26:06.818 { 00:26:06.818 "method": "sock_impl_set_options", 00:26:06.818 "params": { 00:26:06.818 "impl_name": "ssl", 00:26:06.818 "recv_buf_size": 4096, 00:26:06.818 "send_buf_size": 4096, 00:26:06.818 "enable_recv_pipe": true, 00:26:06.818 "enable_quickack": false, 00:26:06.818 "enable_placement_id": 0, 00:26:06.818 "enable_zerocopy_send_server": true, 00:26:06.818 "enable_zerocopy_send_client": false, 00:26:06.818 "zerocopy_threshold": 0, 00:26:06.818 "tls_version": 0, 00:26:06.818 "enable_ktls": false 00:26:06.818 } 00:26:06.818 }, 00:26:06.818 { 00:26:06.818 "method": "sock_impl_set_options", 00:26:06.818 "params": { 00:26:06.818 "impl_name": "posix", 00:26:06.818 "recv_buf_size": 2097152, 00:26:06.818 "send_buf_size": 2097152, 00:26:06.818 "enable_recv_pipe": true, 00:26:06.818 "enable_quickack": false, 00:26:06.818 "enable_placement_id": 0, 00:26:06.818 "enable_zerocopy_send_server": true, 00:26:06.818 "enable_zerocopy_send_client": false, 00:26:06.818 "zerocopy_threshold": 0, 00:26:06.818 "tls_version": 0, 00:26:06.818 "enable_ktls": false 00:26:06.818 } 00:26:06.818 } 00:26:06.818 ] 00:26:06.818 }, 00:26:06.818 { 00:26:06.818 "subsystem": "vmd", 00:26:06.818 "config": [] 00:26:06.818 }, 00:26:06.818 { 00:26:06.818 "subsystem": "accel", 00:26:06.818 "config": [ 00:26:06.818 { 00:26:06.818 "method": "accel_set_options", 00:26:06.818 "params": { 00:26:06.818 "small_cache_size": 128, 00:26:06.818 "large_cache_size": 16, 00:26:06.818 "task_count": 2048, 00:26:06.818 "sequence_count": 2048, 00:26:06.818 "buf_count": 2048 00:26:06.818 } 00:26:06.818 } 00:26:06.818 ] 00:26:06.818 }, 00:26:06.818 { 00:26:06.818 "subsystem": "bdev", 00:26:06.818 "config": [ 00:26:06.818 { 00:26:06.818 "method": "bdev_set_options", 00:26:06.818 "params": { 00:26:06.818 "bdev_io_pool_size": 65535, 00:26:06.818 "bdev_io_cache_size": 256, 00:26:06.818 "bdev_auto_examine": true, 00:26:06.818 "iobuf_small_cache_size": 128, 00:26:06.818 "iobuf_large_cache_size": 16 00:26:06.818 } 00:26:06.818 }, 00:26:06.818 { 00:26:06.818 "method": "bdev_raid_set_options", 00:26:06.818 "params": { 00:26:06.818 "process_window_size_kb": 1024, 00:26:06.818 "process_max_bandwidth_mb_sec": 0 00:26:06.818 } 00:26:06.818 }, 00:26:06.818 { 00:26:06.818 "method": "bdev_iscsi_set_options", 00:26:06.818 "params": { 00:26:06.818 "timeout_sec": 30 00:26:06.818 } 00:26:06.818 }, 00:26:06.818 { 00:26:06.818 "method": "bdev_nvme_set_options", 00:26:06.818 "params": { 00:26:06.818 "action_on_timeout": "none", 00:26:06.818 "timeout_us": 0, 
00:26:06.818 "timeout_admin_us": 0, 00:26:06.818 "keep_alive_timeout_ms": 10000, 00:26:06.818 "arbitration_burst": 0, 00:26:06.818 "low_priority_weight": 0, 00:26:06.818 "medium_priority_weight": 0, 00:26:06.818 "high_priority_weight": 0, 00:26:06.818 "nvme_adminq_poll_period_us": 10000, 00:26:06.818 "nvme_ioq_poll_period_us": 0, 00:26:06.818 "io_queue_requests": 512, 00:26:06.818 "delay_cmd_submit": true, 00:26:06.818 "transport_retry_count": 4, 00:26:06.818 "bdev_retry_count": 3, 00:26:06.818 "transport_ack_timeout": 0, 00:26:06.818 "ctrlr_loss_timeout_sec": 0, 00:26:06.818 "reconnect_delay_sec": 0, 00:26:06.818 "fast_io_fail_timeout_sec": 0, 00:26:06.818 "disable_auto_failback": false, 00:26:06.818 "generate_uuids": false, 00:26:06.818 "transport_tos": 0, 00:26:06.818 "nvme_error_stat": false, 00:26:06.818 "rdma_srq_size": 0, 00:26:06.818 "io_path_stat": false, 00:26:06.818 "allow_accel_sequence": false, 00:26:06.818 "rdma_max_cq_size": 0, 00:26:06.818 "rdma_cm_event_timeout_ms": 0, 00:26:06.818 "dhchap_digests": [ 00:26:06.818 "sha256", 00:26:06.818 "sha384", 00:26:06.818 "sha512" 00:26:06.818 ], 00:26:06.818 "dhchap_dhgroups": [ 00:26:06.818 "null", 00:26:06.818 "ffdhe2048", 00:26:06.818 "ffdhe3072", 00:26:06.818 "ffdhe4096", 00:26:06.819 "ffdhe6144", 00:26:06.819 "ffdhe8192" 00:26:06.819 ] 00:26:06.819 } 00:26:06.819 }, 00:26:06.819 { 00:26:06.819 "method": "bdev_nvme_attach_controller", 00:26:06.819 "params": { 00:26:06.819 "name": "TLSTEST", 00:26:06.819 "trtype": "TCP", 00:26:06.819 "adrfam": "IPv4", 00:26:06.819 "traddr": "10.0.0.2", 00:26:06.819 "trsvcid": "4420", 00:26:06.819 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:06.819 "prchk_reftag": false, 00:26:06.819 "prchk_guard": false, 00:26:06.819 "ctrlr_loss_timeout_sec": 0, 00:26:06.819 "reconnect_delay_sec": 0, 00:26:06.819 "fast_io_fail_timeout_sec": 0, 00:26:06.819 "psk": "/tmp/tmp.n08PQwUmHc", 00:26:06.819 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:06.819 "hdgst": false, 00:26:06.819 "ddgst": false 00:26:06.819 } 00:26:06.819 }, 00:26:06.819 { 00:26:06.819 "method": "bdev_nvme_set_hotplug", 00:26:06.819 "params": { 00:26:06.819 "period_us": 100000, 00:26:06.819 "enable": false 00:26:06.819 } 00:26:06.819 }, 00:26:06.819 { 00:26:06.819 "method": "bdev_wait_for_examine" 00:26:06.819 } 00:26:06.819 ] 00:26:06.819 }, 00:26:06.819 { 00:26:06.819 "subsystem": "nbd", 00:26:06.819 "config": [] 00:26:06.819 } 00:26:06.819 ] 00:26:06.819 }' 00:26:06.819 10:42:12 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 2044794 00:26:06.819 10:42:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2044794 ']' 00:26:06.819 10:42:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2044794 00:26:06.819 10:42:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:26:06.819 10:42:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:06.819 10:42:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2044794 00:26:06.819 10:42:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:26:06.819 10:42:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:26:06.819 10:42:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2044794' 00:26:06.819 killing process with pid 2044794 00:26:06.819 10:42:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2044794 00:26:06.819 Received shutdown signal, test time was about 10.000000 seconds 
00:26:06.819 00:26:06.819 Latency(us) 00:26:06.819 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:06.819 =================================================================================================================== 00:26:06.819 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:06.819 [2024-07-22 10:42:12.441209] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:26:06.819 10:42:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2044794 00:26:07.080 10:42:12 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 2044223 00:26:07.080 10:42:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2044223 ']' 00:26:07.080 10:42:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2044223 00:26:07.080 10:42:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:26:07.080 10:42:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:07.080 10:42:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2044223 00:26:07.080 10:42:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:07.080 10:42:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:07.080 10:42:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2044223' 00:26:07.080 killing process with pid 2044223 00:26:07.080 10:42:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2044223 00:26:07.080 [2024-07-22 10:42:12.599953] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:26:07.080 10:42:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2044223 00:26:07.080 10:42:12 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:26:07.080 10:42:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:07.080 10:42:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:07.080 10:42:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:07.080 10:42:12 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:26:07.080 "subsystems": [ 00:26:07.080 { 00:26:07.080 "subsystem": "keyring", 00:26:07.080 "config": [] 00:26:07.080 }, 00:26:07.080 { 00:26:07.080 "subsystem": "iobuf", 00:26:07.080 "config": [ 00:26:07.080 { 00:26:07.080 "method": "iobuf_set_options", 00:26:07.080 "params": { 00:26:07.080 "small_pool_count": 8192, 00:26:07.080 "large_pool_count": 1024, 00:26:07.080 "small_bufsize": 8192, 00:26:07.080 "large_bufsize": 135168 00:26:07.080 } 00:26:07.080 } 00:26:07.080 ] 00:26:07.080 }, 00:26:07.080 { 00:26:07.080 "subsystem": "sock", 00:26:07.080 "config": [ 00:26:07.080 { 00:26:07.080 "method": "sock_set_default_impl", 00:26:07.080 "params": { 00:26:07.080 "impl_name": "posix" 00:26:07.080 } 00:26:07.080 }, 00:26:07.080 { 00:26:07.080 "method": "sock_impl_set_options", 00:26:07.080 "params": { 00:26:07.080 "impl_name": "ssl", 00:26:07.080 "recv_buf_size": 4096, 00:26:07.080 "send_buf_size": 4096, 00:26:07.080 "enable_recv_pipe": true, 00:26:07.080 "enable_quickack": false, 00:26:07.080 "enable_placement_id": 0, 00:26:07.080 "enable_zerocopy_send_server": true, 00:26:07.080 "enable_zerocopy_send_client": false, 00:26:07.080 "zerocopy_threshold": 0, 00:26:07.080 "tls_version": 0, 
00:26:07.080 "enable_ktls": false 00:26:07.080 } 00:26:07.080 }, 00:26:07.080 { 00:26:07.080 "method": "sock_impl_set_options", 00:26:07.080 "params": { 00:26:07.080 "impl_name": "posix", 00:26:07.080 "recv_buf_size": 2097152, 00:26:07.080 "send_buf_size": 2097152, 00:26:07.080 "enable_recv_pipe": true, 00:26:07.080 "enable_quickack": false, 00:26:07.080 "enable_placement_id": 0, 00:26:07.080 "enable_zerocopy_send_server": true, 00:26:07.080 "enable_zerocopy_send_client": false, 00:26:07.080 "zerocopy_threshold": 0, 00:26:07.080 "tls_version": 0, 00:26:07.080 "enable_ktls": false 00:26:07.080 } 00:26:07.080 } 00:26:07.080 ] 00:26:07.080 }, 00:26:07.080 { 00:26:07.080 "subsystem": "vmd", 00:26:07.080 "config": [] 00:26:07.080 }, 00:26:07.080 { 00:26:07.080 "subsystem": "accel", 00:26:07.080 "config": [ 00:26:07.080 { 00:26:07.080 "method": "accel_set_options", 00:26:07.080 "params": { 00:26:07.080 "small_cache_size": 128, 00:26:07.080 "large_cache_size": 16, 00:26:07.080 "task_count": 2048, 00:26:07.080 "sequence_count": 2048, 00:26:07.080 "buf_count": 2048 00:26:07.080 } 00:26:07.080 } 00:26:07.080 ] 00:26:07.080 }, 00:26:07.080 { 00:26:07.080 "subsystem": "bdev", 00:26:07.080 "config": [ 00:26:07.080 { 00:26:07.080 "method": "bdev_set_options", 00:26:07.080 "params": { 00:26:07.080 "bdev_io_pool_size": 65535, 00:26:07.080 "bdev_io_cache_size": 256, 00:26:07.080 "bdev_auto_examine": true, 00:26:07.080 "iobuf_small_cache_size": 128, 00:26:07.080 "iobuf_large_cache_size": 16 00:26:07.080 } 00:26:07.080 }, 00:26:07.080 { 00:26:07.080 "method": "bdev_raid_set_options", 00:26:07.080 "params": { 00:26:07.080 "process_window_size_kb": 1024, 00:26:07.080 "process_max_bandwidth_mb_sec": 0 00:26:07.080 } 00:26:07.080 }, 00:26:07.080 { 00:26:07.080 "method": "bdev_iscsi_set_options", 00:26:07.080 "params": { 00:26:07.080 "timeout_sec": 30 00:26:07.080 } 00:26:07.080 }, 00:26:07.080 { 00:26:07.080 "method": "bdev_nvme_set_options", 00:26:07.080 "params": { 00:26:07.080 "action_on_timeout": "none", 00:26:07.080 "timeout_us": 0, 00:26:07.080 "timeout_admin_us": 0, 00:26:07.080 "keep_alive_timeout_ms": 10000, 00:26:07.080 "arbitration_burst": 0, 00:26:07.080 "low_priority_weight": 0, 00:26:07.080 "medium_priority_weight": 0, 00:26:07.080 "high_priority_weight": 0, 00:26:07.080 "nvme_adminq_poll_period_us": 10000, 00:26:07.080 "nvme_ioq_poll_period_us": 0, 00:26:07.080 "io_queue_requests": 0, 00:26:07.080 "delay_cmd_submit": true, 00:26:07.080 "transport_retry_count": 4, 00:26:07.080 "bdev_retry_count": 3, 00:26:07.080 "transport_ack_timeout": 0, 00:26:07.080 "ctrlr_loss_timeout_sec": 0, 00:26:07.080 "reconnect_delay_sec": 0, 00:26:07.080 "fast_io_fail_timeout_sec": 0, 00:26:07.080 "disable_auto_failback": false, 00:26:07.080 "generate_uuids": false, 00:26:07.080 "transport_tos": 0, 00:26:07.080 "nvme_error_stat": false, 00:26:07.080 "rdma_srq_size": 0, 00:26:07.080 "io_path_stat": false, 00:26:07.080 "allow_accel_sequence": false, 00:26:07.080 "rdma_max_cq_size": 0, 00:26:07.080 "rdma_cm_event_timeout_ms": 0, 00:26:07.080 "dhchap_digests": [ 00:26:07.080 "sha256", 00:26:07.080 "sha384", 00:26:07.080 "sha512" 00:26:07.080 ], 00:26:07.080 "dhchap_dhgroups": [ 00:26:07.080 "null", 00:26:07.080 "ffdhe2048", 00:26:07.080 "ffdhe3072", 00:26:07.080 "ffdhe4096", 00:26:07.080 "ffdhe6144", 00:26:07.080 "ffdhe8192" 00:26:07.080 ] 00:26:07.080 } 00:26:07.080 }, 00:26:07.080 { 00:26:07.080 "method": "bdev_nvme_set_hotplug", 00:26:07.080 "params": { 00:26:07.081 "period_us": 100000, 00:26:07.081 "enable": false 
00:26:07.081 } 00:26:07.081 }, 00:26:07.081 { 00:26:07.081 "method": "bdev_malloc_create", 00:26:07.081 "params": { 00:26:07.081 "name": "malloc0", 00:26:07.081 "num_blocks": 8192, 00:26:07.081 "block_size": 4096, 00:26:07.081 "physical_block_size": 4096, 00:26:07.081 "uuid": "463a8685-91fc-4bc0-9762-c90854160f3e", 00:26:07.081 "optimal_io_boundary": 0 00:26:07.081 } 00:26:07.081 }, 00:26:07.081 { 00:26:07.081 "method": "bdev_wait_for_examine" 00:26:07.081 } 00:26:07.081 ] 00:26:07.081 }, 00:26:07.081 { 00:26:07.081 "subsystem": "nbd", 00:26:07.081 "config": [] 00:26:07.081 }, 00:26:07.081 { 00:26:07.081 "subsystem": "scheduler", 00:26:07.081 "config": [ 00:26:07.081 { 00:26:07.081 "method": "framework_set_scheduler", 00:26:07.081 "params": { 00:26:07.081 "name": "static" 00:26:07.081 } 00:26:07.081 } 00:26:07.081 ] 00:26:07.081 }, 00:26:07.081 { 00:26:07.081 "subsystem": "nvmf", 00:26:07.081 "config": [ 00:26:07.081 { 00:26:07.081 "method": "nvmf_set_config", 00:26:07.081 "params": { 00:26:07.081 "discovery_filter": "match_any", 00:26:07.081 "admin_cmd_passthru": { 00:26:07.081 "identify_ctrlr": false 00:26:07.081 } 00:26:07.081 } 00:26:07.081 }, 00:26:07.081 { 00:26:07.081 "method": "nvmf_set_max_subsystems", 00:26:07.081 "params": { 00:26:07.081 "max_subsystems": 1024 00:26:07.081 } 00:26:07.081 }, 00:26:07.081 { 00:26:07.081 "method": "nvmf_set_crdt", 00:26:07.081 "params": { 00:26:07.081 "crdt1": 0, 00:26:07.081 "crdt2": 0, 00:26:07.081 "crdt3": 0 00:26:07.081 } 00:26:07.081 }, 00:26:07.081 { 00:26:07.081 "method": "nvmf_create_transport", 00:26:07.081 "params": { 00:26:07.081 "trtype": "TCP", 00:26:07.081 "max_queue_depth": 128, 00:26:07.081 "max_io_qpairs_per_ctrlr": 127, 00:26:07.081 "in_capsule_data_size": 4096, 00:26:07.081 "max_io_size": 131072, 00:26:07.081 "io_unit_size": 131072, 00:26:07.081 "max_aq_depth": 128, 00:26:07.081 "num_shared_buffers": 511, 00:26:07.081 "buf_cache_size": 4294967295, 00:26:07.081 "dif_insert_or_strip": false, 00:26:07.081 "zcopy": false, 00:26:07.081 "c2h_success": false, 00:26:07.081 "sock_priority": 0, 00:26:07.081 "abort_timeout_sec": 1, 00:26:07.081 "ack_timeout": 0, 00:26:07.081 "data_wr_pool_size": 0 00:26:07.081 } 00:26:07.081 }, 00:26:07.081 { 00:26:07.081 "method": "nvmf_create_subsystem", 00:26:07.081 "params": { 00:26:07.081 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:07.081 "allow_any_host": false, 00:26:07.081 "serial_number": "SPDK00000000000001", 00:26:07.081 "model_number": "SPDK bdev Controller", 00:26:07.081 "max_namespaces": 10, 00:26:07.081 "min_cntlid": 1, 00:26:07.081 "max_cntlid": 65519, 00:26:07.081 "ana_reporting": false 00:26:07.081 } 00:26:07.081 }, 00:26:07.081 { 00:26:07.081 "method": "nvmf_subsystem_add_host", 00:26:07.081 "params": { 00:26:07.081 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:07.081 "host": "nqn.2016-06.io.spdk:host1", 00:26:07.081 "psk": "/tmp/tmp.n08PQwUmHc" 00:26:07.081 } 00:26:07.081 }, 00:26:07.081 { 00:26:07.081 "method": "nvmf_subsystem_add_ns", 00:26:07.081 "params": { 00:26:07.081 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:07.081 "namespace": { 00:26:07.081 "nsid": 1, 00:26:07.081 "bdev_name": "malloc0", 00:26:07.081 "nguid": "463A868591FC4BC09762C90854160F3E", 00:26:07.081 "uuid": "463a8685-91fc-4bc0-9762-c90854160f3e", 00:26:07.081 "no_auto_visible": false 00:26:07.081 } 00:26:07.081 } 00:26:07.081 }, 00:26:07.081 { 00:26:07.081 "method": "nvmf_subsystem_add_listener", 00:26:07.081 "params": { 00:26:07.081 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:07.081 "listen_address": { 00:26:07.081 
"trtype": "TCP", 00:26:07.081 "adrfam": "IPv4", 00:26:07.081 "traddr": "10.0.0.2", 00:26:07.081 "trsvcid": "4420" 00:26:07.081 }, 00:26:07.081 "secure_channel": true 00:26:07.081 } 00:26:07.081 } 00:26:07.081 ] 00:26:07.081 } 00:26:07.081 ] 00:26:07.081 }' 00:26:07.081 10:42:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2045180 00:26:07.081 10:42:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2045180 00:26:07.081 10:42:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:26:07.081 10:42:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2045180 ']' 00:26:07.081 10:42:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:07.081 10:42:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:07.081 10:42:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:07.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:07.081 10:42:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:07.081 10:42:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:07.081 [2024-07-22 10:42:12.771349] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:26:07.081 [2024-07-22 10:42:12.771410] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:07.342 EAL: No free 2048 kB hugepages reported on node 1 00:26:07.342 [2024-07-22 10:42:12.856062] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:07.342 [2024-07-22 10:42:12.884748] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:07.342 [2024-07-22 10:42:12.884781] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:07.342 [2024-07-22 10:42:12.884786] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:07.342 [2024-07-22 10:42:12.884791] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:07.342 [2024-07-22 10:42:12.884795] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:07.342 [2024-07-22 10:42:12.884840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:07.601 [2024-07-22 10:42:13.062708] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:07.601 [2024-07-22 10:42:13.087194] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:26:07.601 [2024-07-22 10:42:13.103234] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:07.601 [2024-07-22 10:42:13.103425] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:07.861 10:42:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:07.861 10:42:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:26:07.861 10:42:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:07.861 10:42:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:07.861 10:42:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:08.121 10:42:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:08.121 10:42:13 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=2045335 00:26:08.121 10:42:13 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 2045335 /var/tmp/bdevperf.sock 00:26:08.121 10:42:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2045335 ']' 00:26:08.121 10:42:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:08.121 10:42:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:08.121 10:42:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:08.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
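Editorial note: the commands that follow start bdevperf in wait-for-RPC mode (`-z`) on its own RPC socket, hand it a JSON config containing the TLS-enabled `bdev_nvme_attach_controller` call, and then trigger the actual I/O from a second process with `bdevperf.py perform_tests`. A condensed sketch of that driver pattern is below, reusing the workload parameters from the trace (`-q 128 -o 4096 -w verify -t 10`); the config file name is a placeholder for the `/dev/fd/63` descriptor the test script uses.

#!/usr/bin/env bash
# Sketch: run bdevperf as an RPC-driven service and kick off the workload from
# a separate process, as the TLSTEST run below does.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bdevperf.sock

# -z: start idle and wait for RPCs; -r: private RPC socket for this instance;
# -c: JSON config describing the bdevs to create (here, the NVMe/TCP+TLS attach).
"$SPDK_DIR/build/examples/bdevperf" -m 0x4 -z -r "$BPERF_SOCK" \
    -q 128 -o 4096 -w verify -t 10 -c /tmp/bdevperf_config.json &

# After the RPC socket is listening, start the queued verify workload and wait
# (up to 20 s here) for the results to come back.
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -t 20 -s "$BPERF_SOCK" perform_tests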
00:26:08.121 10:42:13 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:26:08.121 10:42:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:08.121 10:42:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:08.122 10:42:13 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:26:08.122 "subsystems": [ 00:26:08.122 { 00:26:08.122 "subsystem": "keyring", 00:26:08.122 "config": [] 00:26:08.122 }, 00:26:08.122 { 00:26:08.122 "subsystem": "iobuf", 00:26:08.122 "config": [ 00:26:08.122 { 00:26:08.122 "method": "iobuf_set_options", 00:26:08.122 "params": { 00:26:08.122 "small_pool_count": 8192, 00:26:08.122 "large_pool_count": 1024, 00:26:08.122 "small_bufsize": 8192, 00:26:08.122 "large_bufsize": 135168 00:26:08.122 } 00:26:08.122 } 00:26:08.122 ] 00:26:08.122 }, 00:26:08.122 { 00:26:08.122 "subsystem": "sock", 00:26:08.122 "config": [ 00:26:08.122 { 00:26:08.122 "method": "sock_set_default_impl", 00:26:08.122 "params": { 00:26:08.122 "impl_name": "posix" 00:26:08.122 } 00:26:08.122 }, 00:26:08.122 { 00:26:08.122 "method": "sock_impl_set_options", 00:26:08.122 "params": { 00:26:08.122 "impl_name": "ssl", 00:26:08.122 "recv_buf_size": 4096, 00:26:08.122 "send_buf_size": 4096, 00:26:08.122 "enable_recv_pipe": true, 00:26:08.122 "enable_quickack": false, 00:26:08.122 "enable_placement_id": 0, 00:26:08.122 "enable_zerocopy_send_server": true, 00:26:08.122 "enable_zerocopy_send_client": false, 00:26:08.122 "zerocopy_threshold": 0, 00:26:08.122 "tls_version": 0, 00:26:08.122 "enable_ktls": false 00:26:08.122 } 00:26:08.122 }, 00:26:08.122 { 00:26:08.122 "method": "sock_impl_set_options", 00:26:08.122 "params": { 00:26:08.122 "impl_name": "posix", 00:26:08.122 "recv_buf_size": 2097152, 00:26:08.122 "send_buf_size": 2097152, 00:26:08.122 "enable_recv_pipe": true, 00:26:08.122 "enable_quickack": false, 00:26:08.122 "enable_placement_id": 0, 00:26:08.122 "enable_zerocopy_send_server": true, 00:26:08.122 "enable_zerocopy_send_client": false, 00:26:08.122 "zerocopy_threshold": 0, 00:26:08.122 "tls_version": 0, 00:26:08.122 "enable_ktls": false 00:26:08.122 } 00:26:08.122 } 00:26:08.122 ] 00:26:08.122 }, 00:26:08.122 { 00:26:08.122 "subsystem": "vmd", 00:26:08.122 "config": [] 00:26:08.122 }, 00:26:08.122 { 00:26:08.122 "subsystem": "accel", 00:26:08.122 "config": [ 00:26:08.122 { 00:26:08.122 "method": "accel_set_options", 00:26:08.122 "params": { 00:26:08.122 "small_cache_size": 128, 00:26:08.122 "large_cache_size": 16, 00:26:08.122 "task_count": 2048, 00:26:08.122 "sequence_count": 2048, 00:26:08.122 "buf_count": 2048 00:26:08.122 } 00:26:08.122 } 00:26:08.122 ] 00:26:08.122 }, 00:26:08.122 { 00:26:08.122 "subsystem": "bdev", 00:26:08.122 "config": [ 00:26:08.122 { 00:26:08.122 "method": "bdev_set_options", 00:26:08.122 "params": { 00:26:08.122 "bdev_io_pool_size": 65535, 00:26:08.122 "bdev_io_cache_size": 256, 00:26:08.122 "bdev_auto_examine": true, 00:26:08.122 "iobuf_small_cache_size": 128, 00:26:08.122 "iobuf_large_cache_size": 16 00:26:08.122 } 00:26:08.122 }, 00:26:08.122 { 00:26:08.122 "method": "bdev_raid_set_options", 00:26:08.122 "params": { 00:26:08.122 "process_window_size_kb": 1024, 00:26:08.122 "process_max_bandwidth_mb_sec": 0 00:26:08.122 } 00:26:08.122 }, 00:26:08.122 { 00:26:08.122 "method": "bdev_iscsi_set_options", 00:26:08.122 "params": { 00:26:08.122 "timeout_sec": 30 00:26:08.122 } 
00:26:08.122 }, 00:26:08.122 { 00:26:08.122 "method": "bdev_nvme_set_options", 00:26:08.122 "params": { 00:26:08.122 "action_on_timeout": "none", 00:26:08.122 "timeout_us": 0, 00:26:08.122 "timeout_admin_us": 0, 00:26:08.122 "keep_alive_timeout_ms": 10000, 00:26:08.122 "arbitration_burst": 0, 00:26:08.122 "low_priority_weight": 0, 00:26:08.122 "medium_priority_weight": 0, 00:26:08.122 "high_priority_weight": 0, 00:26:08.122 "nvme_adminq_poll_period_us": 10000, 00:26:08.122 "nvme_ioq_poll_period_us": 0, 00:26:08.122 "io_queue_requests": 512, 00:26:08.122 "delay_cmd_submit": true, 00:26:08.122 "transport_retry_count": 4, 00:26:08.122 "bdev_retry_count": 3, 00:26:08.122 "transport_ack_timeout": 0, 00:26:08.122 "ctrlr_loss_timeout_sec": 0, 00:26:08.122 "reconnect_delay_sec": 0, 00:26:08.122 "fast_io_fail_timeout_sec": 0, 00:26:08.122 "disable_auto_failback": false, 00:26:08.122 "generate_uuids": false, 00:26:08.122 "transport_tos": 0, 00:26:08.122 "nvme_error_stat": false, 00:26:08.122 "rdma_srq_size": 0, 00:26:08.122 "io_path_stat": false, 00:26:08.122 "allow_accel_sequence": false, 00:26:08.122 "rdma_max_cq_size": 0, 00:26:08.122 "rdma_cm_event_timeout_ms": 0, 00:26:08.122 "dhchap_digests": [ 00:26:08.122 "sha256", 00:26:08.122 "sha384", 00:26:08.122 "sha512" 00:26:08.122 ], 00:26:08.122 "dhchap_dhgroups": [ 00:26:08.122 "null", 00:26:08.122 "ffdhe2048", 00:26:08.122 "ffdhe3072", 00:26:08.122 "ffdhe4096", 00:26:08.122 "ffdhe6144", 00:26:08.122 "ffdhe8192" 00:26:08.122 ] 00:26:08.122 } 00:26:08.122 }, 00:26:08.122 { 00:26:08.122 "method": "bdev_nvme_attach_controller", 00:26:08.122 "params": { 00:26:08.122 "name": "TLSTEST", 00:26:08.122 "trtype": "TCP", 00:26:08.122 "adrfam": "IPv4", 00:26:08.122 "traddr": "10.0.0.2", 00:26:08.122 "trsvcid": "4420", 00:26:08.122 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:08.122 "prchk_reftag": false, 00:26:08.122 "prchk_guard": false, 00:26:08.122 "ctrlr_loss_timeout_sec": 0, 00:26:08.122 "reconnect_delay_sec": 0, 00:26:08.122 "fast_io_fail_timeout_sec": 0, 00:26:08.122 "psk": "/tmp/tmp.n08PQwUmHc", 00:26:08.122 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:08.122 "hdgst": false, 00:26:08.122 "ddgst": false 00:26:08.122 } 00:26:08.122 }, 00:26:08.122 { 00:26:08.122 "method": "bdev_nvme_set_hotplug", 00:26:08.122 "params": { 00:26:08.122 "period_us": 100000, 00:26:08.122 "enable": false 00:26:08.122 } 00:26:08.122 }, 00:26:08.122 { 00:26:08.122 "method": "bdev_wait_for_examine" 00:26:08.122 } 00:26:08.122 ] 00:26:08.122 }, 00:26:08.122 { 00:26:08.122 "subsystem": "nbd", 00:26:08.122 "config": [] 00:26:08.122 } 00:26:08.122 ] 00:26:08.122 }' 00:26:08.122 [2024-07-22 10:42:13.621616] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
00:26:08.122 [2024-07-22 10:42:13.621667] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2045335 ] 00:26:08.122 EAL: No free 2048 kB hugepages reported on node 1 00:26:08.122 [2024-07-22 10:42:13.675423] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:08.122 [2024-07-22 10:42:13.703582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:08.382 [2024-07-22 10:42:13.822835] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:08.383 [2024-07-22 10:42:13.822894] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:26:08.953 10:42:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:08.953 10:42:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:26:08.953 10:42:14 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:26:08.953 Running I/O for 10 seconds... 00:26:18.956 00:26:18.956 Latency(us) 00:26:18.956 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:18.956 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:18.956 Verification LBA range: start 0x0 length 0x2000 00:26:18.956 TLSTESTn1 : 10.02 4212.46 16.45 0.00 0.00 30345.59 4560.21 114469.55 00:26:18.956 =================================================================================================================== 00:26:18.956 Total : 4212.46 16.45 0.00 0.00 30345.59 4560.21 114469.55 00:26:18.956 0 00:26:18.956 10:42:24 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:18.956 10:42:24 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 2045335 00:26:18.956 10:42:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2045335 ']' 00:26:18.956 10:42:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2045335 00:26:18.956 10:42:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:26:18.956 10:42:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:18.956 10:42:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2045335 00:26:18.956 10:42:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:26:18.956 10:42:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:26:18.956 10:42:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2045335' 00:26:18.956 killing process with pid 2045335 00:26:18.956 10:42:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2045335 00:26:18.956 Received shutdown signal, test time was about 10.000000 seconds 00:26:18.956 00:26:18.956 Latency(us) 00:26:18.956 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:18.956 =================================================================================================================== 00:26:18.956 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:18.956 [2024-07-22 10:42:24.568166] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:26:18.956 10:42:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2045335 00:26:19.266 10:42:24 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 2045180 00:26:19.266 10:42:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2045180 ']' 00:26:19.266 10:42:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2045180 00:26:19.266 10:42:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:26:19.266 10:42:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:19.266 10:42:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2045180 00:26:19.266 10:42:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:19.266 10:42:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:19.266 10:42:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2045180' 00:26:19.266 killing process with pid 2045180 00:26:19.266 10:42:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2045180 00:26:19.266 [2024-07-22 10:42:24.729215] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:26:19.266 10:42:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2045180 00:26:19.266 10:42:24 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:26:19.266 10:42:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:19.266 10:42:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:19.266 10:42:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:19.266 10:42:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2047510 00:26:19.266 10:42:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2047510 00:26:19.266 10:42:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:26:19.266 10:42:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2047510 ']' 00:26:19.266 10:42:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:19.266 10:42:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:19.266 10:42:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:19.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:19.266 10:42:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:19.266 10:42:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:19.266 [2024-07-22 10:42:24.909047] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
00:26:19.266 [2024-07-22 10:42:24.909100] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:19.266 EAL: No free 2048 kB hugepages reported on node 1 00:26:19.527 [2024-07-22 10:42:24.982206] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:19.527 [2024-07-22 10:42:25.013351] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:19.527 [2024-07-22 10:42:25.013392] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:19.527 [2024-07-22 10:42:25.013404] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:19.527 [2024-07-22 10:42:25.013411] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:19.527 [2024-07-22 10:42:25.013416] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:19.527 [2024-07-22 10:42:25.013440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:20.097 10:42:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:20.097 10:42:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:26:20.097 10:42:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:20.097 10:42:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:20.097 10:42:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:20.097 10:42:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:20.097 10:42:25 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.n08PQwUmHc 00:26:20.097 10:42:25 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.n08PQwUmHc 00:26:20.097 10:42:25 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:20.357 [2024-07-22 10:42:25.842065] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:20.357 10:42:25 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:26:20.357 10:42:26 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:26:20.617 [2024-07-22 10:42:26.170876] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:20.617 [2024-07-22 10:42:26.171094] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:20.617 10:42:26 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:26:20.876 malloc0 00:26:20.876 10:42:26 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:26:20.876 10:42:26 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.n08PQwUmHc 00:26:21.136 [2024-07-22 10:42:26.642883] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:26:21.136 10:42:26 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:26:21.136 10:42:26 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=2047915 00:26:21.136 10:42:26 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:21.136 10:42:26 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 2047915 /var/tmp/bdevperf.sock 00:26:21.136 10:42:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2047915 ']' 00:26:21.136 10:42:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:21.136 10:42:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:21.136 10:42:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:21.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:21.136 10:42:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:21.136 10:42:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:21.136 [2024-07-22 10:42:26.712784] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:26:21.137 [2024-07-22 10:42:26.712835] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2047915 ] 00:26:21.137 EAL: No free 2048 kB hugepages reported on node 1 00:26:21.137 [2024-07-22 10:42:26.791664] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.137 [2024-07-22 10:42:26.820291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:22.077 10:42:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:22.077 10:42:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:26:22.077 10:42:27 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.n08PQwUmHc 00:26:22.077 10:42:27 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:26:22.077 [2024-07-22 10:42:27.757345] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:22.338 nvme0n1 00:26:22.338 10:42:27 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:22.338 Running I/O for 1 seconds... 
00:26:23.279 00:26:23.279 Latency(us) 00:26:23.279 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:23.279 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:23.279 Verification LBA range: start 0x0 length 0x2000 00:26:23.279 nvme0n1 : 1.02 5805.48 22.68 0.00 0.00 21839.84 6225.92 28835.84 00:26:23.279 =================================================================================================================== 00:26:23.279 Total : 5805.48 22.68 0.00 0.00 21839.84 6225.92 28835.84 00:26:23.279 0 00:26:23.279 10:42:28 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 2047915 00:26:23.279 10:42:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2047915 ']' 00:26:23.279 10:42:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2047915 00:26:23.279 10:42:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:26:23.279 10:42:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:23.279 10:42:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2047915 00:26:23.539 10:42:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:23.539 10:42:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:23.539 10:42:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2047915' 00:26:23.539 killing process with pid 2047915 00:26:23.539 10:42:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2047915 00:26:23.539 Received shutdown signal, test time was about 1.000000 seconds 00:26:23.539 00:26:23.539 Latency(us) 00:26:23.539 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:23.539 =================================================================================================================== 00:26:23.539 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:23.539 10:42:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2047915 00:26:23.539 10:42:29 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 2047510 00:26:23.539 10:42:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2047510 ']' 00:26:23.539 10:42:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2047510 00:26:23.539 10:42:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:26:23.539 10:42:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:23.539 10:42:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2047510 00:26:23.539 10:42:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:23.539 10:42:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:23.539 10:42:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2047510' 00:26:23.539 killing process with pid 2047510 00:26:23.539 10:42:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2047510 00:26:23.539 [2024-07-22 10:42:29.170337] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:26:23.539 10:42:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2047510 00:26:23.799 10:42:29 nvmf_tcp.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:26:23.799 10:42:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:23.799 
10:42:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:23.799 10:42:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:23.799 10:42:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2048388 00:26:23.799 10:42:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2048388 00:26:23.799 10:42:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:26:23.799 10:42:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2048388 ']' 00:26:23.799 10:42:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:23.799 10:42:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:23.799 10:42:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:23.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:23.799 10:42:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:23.799 10:42:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:23.799 [2024-07-22 10:42:29.351078] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:26:23.799 [2024-07-22 10:42:29.351128] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:23.799 EAL: No free 2048 kB hugepages reported on node 1 00:26:23.799 [2024-07-22 10:42:29.421858] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:23.799 [2024-07-22 10:42:29.450233] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:23.799 [2024-07-22 10:42:29.450273] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:23.799 [2024-07-22 10:42:29.450280] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:23.799 [2024-07-22 10:42:29.450287] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:23.799 [2024-07-22 10:42:29.450292] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
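Editorial note: in the run that follows, the PSK is no longer handed to the initiator as a raw file path; it is first registered with `keyring_file_add_key` and then referenced by name via `--psk key0` in `bdev_nvme_attach_controller`. A minimal sketch of that client-side sequence is below, reusing the addresses, NQNs, and temporary key file that appear later in this log; in practice the key path would be your own PSK interchange file rather than this test's temp file.

#!/usr/bin/env bash
# Sketch: attach a TLS-protected NVMe/TCP controller through the keyring,
# matching the keyring_file_add_key + bdev_nvme_attach_controller calls below.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bdevperf.sock
PSK_FILE=/tmp/tmp.n08PQwUmHc   # temporary PSK file generated earlier in this test

# Register the PSK file under the name "key0" inside the bdevperf process.
"$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" keyring_file_add_key key0 "$PSK_FILE"

# Attach to the TLS listener on the target, referencing the key by name.
"$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller \
    -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1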
00:26:23.799 [2024-07-22 10:42:29.450318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:24.739 10:42:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:24.739 10:42:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:26:24.739 10:42:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:24.739 10:42:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:24.739 10:42:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:24.739 10:42:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:24.739 10:42:30 nvmf_tcp.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:26:24.739 10:42:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.739 10:42:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:24.739 [2024-07-22 10:42:30.178673] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:24.739 malloc0 00:26:24.739 [2024-07-22 10:42:30.205351] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:24.739 [2024-07-22 10:42:30.205576] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:24.739 10:42:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.739 10:42:30 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=2048670 00:26:24.739 10:42:30 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 2048670 /var/tmp/bdevperf.sock 00:26:24.739 10:42:30 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:26:24.739 10:42:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2048670 ']' 00:26:24.739 10:42:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:24.739 10:42:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:24.739 10:42:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:24.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:24.739 10:42:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:24.739 10:42:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:24.739 [2024-07-22 10:42:30.280932] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
00:26:24.739 [2024-07-22 10:42:30.280978] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2048670 ] 00:26:24.739 EAL: No free 2048 kB hugepages reported on node 1 00:26:24.739 [2024-07-22 10:42:30.361602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.739 [2024-07-22 10:42:30.390211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:25.676 10:42:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:25.676 10:42:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:26:25.676 10:42:31 nvmf_tcp.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.n08PQwUmHc 00:26:25.676 10:42:31 nvmf_tcp.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:26:25.676 [2024-07-22 10:42:31.318969] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:25.936 nvme0n1 00:26:25.936 10:42:31 nvmf_tcp.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:25.936 Running I/O for 1 seconds... 00:26:26.874 00:26:26.874 Latency(us) 00:26:26.874 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:26.874 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:26.874 Verification LBA range: start 0x0 length 0x2000 00:26:26.874 nvme0n1 : 1.02 5675.52 22.17 0.00 0.00 22363.83 6335.15 30583.47 00:26:26.874 =================================================================================================================== 00:26:26.874 Total : 5675.52 22.17 0.00 0.00 22363.83 6335.15 30583.47 00:26:26.874 0 00:26:26.874 10:42:32 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:26:26.874 10:42:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.874 10:42:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:27.134 10:42:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.134 10:42:32 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:26:27.134 "subsystems": [ 00:26:27.134 { 00:26:27.134 "subsystem": "keyring", 00:26:27.134 "config": [ 00:26:27.134 { 00:26:27.134 "method": "keyring_file_add_key", 00:26:27.134 "params": { 00:26:27.134 "name": "key0", 00:26:27.134 "path": "/tmp/tmp.n08PQwUmHc" 00:26:27.134 } 00:26:27.134 } 00:26:27.134 ] 00:26:27.134 }, 00:26:27.134 { 00:26:27.134 "subsystem": "iobuf", 00:26:27.134 "config": [ 00:26:27.134 { 00:26:27.134 "method": "iobuf_set_options", 00:26:27.134 "params": { 00:26:27.134 "small_pool_count": 8192, 00:26:27.134 "large_pool_count": 1024, 00:26:27.134 "small_bufsize": 8192, 00:26:27.134 "large_bufsize": 135168 00:26:27.134 } 00:26:27.134 } 00:26:27.134 ] 00:26:27.134 }, 00:26:27.134 { 00:26:27.134 "subsystem": "sock", 00:26:27.134 "config": [ 00:26:27.134 { 00:26:27.134 "method": "sock_set_default_impl", 00:26:27.134 "params": { 00:26:27.134 "impl_name": "posix" 00:26:27.134 } 
00:26:27.134 }, 00:26:27.134 { 00:26:27.134 "method": "sock_impl_set_options", 00:26:27.134 "params": { 00:26:27.134 "impl_name": "ssl", 00:26:27.134 "recv_buf_size": 4096, 00:26:27.134 "send_buf_size": 4096, 00:26:27.134 "enable_recv_pipe": true, 00:26:27.134 "enable_quickack": false, 00:26:27.134 "enable_placement_id": 0, 00:26:27.134 "enable_zerocopy_send_server": true, 00:26:27.135 "enable_zerocopy_send_client": false, 00:26:27.135 "zerocopy_threshold": 0, 00:26:27.135 "tls_version": 0, 00:26:27.135 "enable_ktls": false 00:26:27.135 } 00:26:27.135 }, 00:26:27.135 { 00:26:27.135 "method": "sock_impl_set_options", 00:26:27.135 "params": { 00:26:27.135 "impl_name": "posix", 00:26:27.135 "recv_buf_size": 2097152, 00:26:27.135 "send_buf_size": 2097152, 00:26:27.135 "enable_recv_pipe": true, 00:26:27.135 "enable_quickack": false, 00:26:27.135 "enable_placement_id": 0, 00:26:27.135 "enable_zerocopy_send_server": true, 00:26:27.135 "enable_zerocopy_send_client": false, 00:26:27.135 "zerocopy_threshold": 0, 00:26:27.135 "tls_version": 0, 00:26:27.135 "enable_ktls": false 00:26:27.135 } 00:26:27.135 } 00:26:27.135 ] 00:26:27.135 }, 00:26:27.135 { 00:26:27.135 "subsystem": "vmd", 00:26:27.135 "config": [] 00:26:27.135 }, 00:26:27.135 { 00:26:27.135 "subsystem": "accel", 00:26:27.135 "config": [ 00:26:27.135 { 00:26:27.135 "method": "accel_set_options", 00:26:27.135 "params": { 00:26:27.135 "small_cache_size": 128, 00:26:27.135 "large_cache_size": 16, 00:26:27.135 "task_count": 2048, 00:26:27.135 "sequence_count": 2048, 00:26:27.135 "buf_count": 2048 00:26:27.135 } 00:26:27.135 } 00:26:27.135 ] 00:26:27.135 }, 00:26:27.135 { 00:26:27.135 "subsystem": "bdev", 00:26:27.135 "config": [ 00:26:27.135 { 00:26:27.135 "method": "bdev_set_options", 00:26:27.135 "params": { 00:26:27.135 "bdev_io_pool_size": 65535, 00:26:27.135 "bdev_io_cache_size": 256, 00:26:27.135 "bdev_auto_examine": true, 00:26:27.135 "iobuf_small_cache_size": 128, 00:26:27.135 "iobuf_large_cache_size": 16 00:26:27.135 } 00:26:27.135 }, 00:26:27.135 { 00:26:27.135 "method": "bdev_raid_set_options", 00:26:27.135 "params": { 00:26:27.135 "process_window_size_kb": 1024, 00:26:27.135 "process_max_bandwidth_mb_sec": 0 00:26:27.135 } 00:26:27.135 }, 00:26:27.135 { 00:26:27.135 "method": "bdev_iscsi_set_options", 00:26:27.135 "params": { 00:26:27.135 "timeout_sec": 30 00:26:27.135 } 00:26:27.135 }, 00:26:27.135 { 00:26:27.135 "method": "bdev_nvme_set_options", 00:26:27.135 "params": { 00:26:27.135 "action_on_timeout": "none", 00:26:27.135 "timeout_us": 0, 00:26:27.135 "timeout_admin_us": 0, 00:26:27.135 "keep_alive_timeout_ms": 10000, 00:26:27.135 "arbitration_burst": 0, 00:26:27.135 "low_priority_weight": 0, 00:26:27.135 "medium_priority_weight": 0, 00:26:27.135 "high_priority_weight": 0, 00:26:27.135 "nvme_adminq_poll_period_us": 10000, 00:26:27.135 "nvme_ioq_poll_period_us": 0, 00:26:27.135 "io_queue_requests": 0, 00:26:27.135 "delay_cmd_submit": true, 00:26:27.135 "transport_retry_count": 4, 00:26:27.135 "bdev_retry_count": 3, 00:26:27.135 "transport_ack_timeout": 0, 00:26:27.135 "ctrlr_loss_timeout_sec": 0, 00:26:27.135 "reconnect_delay_sec": 0, 00:26:27.135 "fast_io_fail_timeout_sec": 0, 00:26:27.135 "disable_auto_failback": false, 00:26:27.135 "generate_uuids": false, 00:26:27.135 "transport_tos": 0, 00:26:27.135 "nvme_error_stat": false, 00:26:27.135 "rdma_srq_size": 0, 00:26:27.135 "io_path_stat": false, 00:26:27.135 "allow_accel_sequence": false, 00:26:27.135 "rdma_max_cq_size": 0, 00:26:27.135 "rdma_cm_event_timeout_ms": 0, 
00:26:27.135 "dhchap_digests": [ 00:26:27.135 "sha256", 00:26:27.135 "sha384", 00:26:27.135 "sha512" 00:26:27.135 ], 00:26:27.135 "dhchap_dhgroups": [ 00:26:27.135 "null", 00:26:27.135 "ffdhe2048", 00:26:27.135 "ffdhe3072", 00:26:27.135 "ffdhe4096", 00:26:27.135 "ffdhe6144", 00:26:27.135 "ffdhe8192" 00:26:27.135 ] 00:26:27.135 } 00:26:27.135 }, 00:26:27.135 { 00:26:27.135 "method": "bdev_nvme_set_hotplug", 00:26:27.135 "params": { 00:26:27.135 "period_us": 100000, 00:26:27.135 "enable": false 00:26:27.135 } 00:26:27.135 }, 00:26:27.135 { 00:26:27.135 "method": "bdev_malloc_create", 00:26:27.135 "params": { 00:26:27.135 "name": "malloc0", 00:26:27.135 "num_blocks": 8192, 00:26:27.135 "block_size": 4096, 00:26:27.135 "physical_block_size": 4096, 00:26:27.135 "uuid": "f6ced2d7-0d90-45dd-a132-e3f7f80dddac", 00:26:27.135 "optimal_io_boundary": 0 00:26:27.135 } 00:26:27.135 }, 00:26:27.135 { 00:26:27.135 "method": "bdev_wait_for_examine" 00:26:27.135 } 00:26:27.135 ] 00:26:27.135 }, 00:26:27.135 { 00:26:27.135 "subsystem": "nbd", 00:26:27.135 "config": [] 00:26:27.135 }, 00:26:27.135 { 00:26:27.135 "subsystem": "scheduler", 00:26:27.135 "config": [ 00:26:27.135 { 00:26:27.135 "method": "framework_set_scheduler", 00:26:27.135 "params": { 00:26:27.135 "name": "static" 00:26:27.135 } 00:26:27.135 } 00:26:27.135 ] 00:26:27.135 }, 00:26:27.135 { 00:26:27.135 "subsystem": "nvmf", 00:26:27.135 "config": [ 00:26:27.135 { 00:26:27.135 "method": "nvmf_set_config", 00:26:27.135 "params": { 00:26:27.135 "discovery_filter": "match_any", 00:26:27.135 "admin_cmd_passthru": { 00:26:27.135 "identify_ctrlr": false 00:26:27.135 } 00:26:27.135 } 00:26:27.135 }, 00:26:27.135 { 00:26:27.135 "method": "nvmf_set_max_subsystems", 00:26:27.135 "params": { 00:26:27.135 "max_subsystems": 1024 00:26:27.135 } 00:26:27.135 }, 00:26:27.135 { 00:26:27.135 "method": "nvmf_set_crdt", 00:26:27.135 "params": { 00:26:27.135 "crdt1": 0, 00:26:27.135 "crdt2": 0, 00:26:27.135 "crdt3": 0 00:26:27.135 } 00:26:27.135 }, 00:26:27.135 { 00:26:27.135 "method": "nvmf_create_transport", 00:26:27.135 "params": { 00:26:27.135 "trtype": "TCP", 00:26:27.135 "max_queue_depth": 128, 00:26:27.135 "max_io_qpairs_per_ctrlr": 127, 00:26:27.135 "in_capsule_data_size": 4096, 00:26:27.135 "max_io_size": 131072, 00:26:27.135 "io_unit_size": 131072, 00:26:27.135 "max_aq_depth": 128, 00:26:27.135 "num_shared_buffers": 511, 00:26:27.135 "buf_cache_size": 4294967295, 00:26:27.135 "dif_insert_or_strip": false, 00:26:27.135 "zcopy": false, 00:26:27.135 "c2h_success": false, 00:26:27.135 "sock_priority": 0, 00:26:27.135 "abort_timeout_sec": 1, 00:26:27.135 "ack_timeout": 0, 00:26:27.135 "data_wr_pool_size": 0 00:26:27.135 } 00:26:27.135 }, 00:26:27.135 { 00:26:27.135 "method": "nvmf_create_subsystem", 00:26:27.135 "params": { 00:26:27.135 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:27.135 "allow_any_host": false, 00:26:27.135 "serial_number": "00000000000000000000", 00:26:27.135 "model_number": "SPDK bdev Controller", 00:26:27.135 "max_namespaces": 32, 00:26:27.135 "min_cntlid": 1, 00:26:27.135 "max_cntlid": 65519, 00:26:27.135 "ana_reporting": false 00:26:27.135 } 00:26:27.135 }, 00:26:27.135 { 00:26:27.135 "method": "nvmf_subsystem_add_host", 00:26:27.135 "params": { 00:26:27.135 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:27.135 "host": "nqn.2016-06.io.spdk:host1", 00:26:27.135 "psk": "key0" 00:26:27.135 } 00:26:27.135 }, 00:26:27.135 { 00:26:27.135 "method": "nvmf_subsystem_add_ns", 00:26:27.135 "params": { 00:26:27.135 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:26:27.135 "namespace": { 00:26:27.135 "nsid": 1, 00:26:27.135 "bdev_name": "malloc0", 00:26:27.135 "nguid": "F6CED2D70D9045DDA132E3F7F80DDDAC", 00:26:27.135 "uuid": "f6ced2d7-0d90-45dd-a132-e3f7f80dddac", 00:26:27.135 "no_auto_visible": false 00:26:27.135 } 00:26:27.135 } 00:26:27.135 }, 00:26:27.135 { 00:26:27.135 "method": "nvmf_subsystem_add_listener", 00:26:27.135 "params": { 00:26:27.135 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:27.135 "listen_address": { 00:26:27.135 "trtype": "TCP", 00:26:27.135 "adrfam": "IPv4", 00:26:27.135 "traddr": "10.0.0.2", 00:26:27.135 "trsvcid": "4420" 00:26:27.135 }, 00:26:27.135 "secure_channel": false, 00:26:27.135 "sock_impl": "ssl" 00:26:27.135 } 00:26:27.135 } 00:26:27.135 ] 00:26:27.135 } 00:26:27.135 ] 00:26:27.135 }' 00:26:27.135 10:42:32 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:26:27.394 10:42:32 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:26:27.394 "subsystems": [ 00:26:27.394 { 00:26:27.394 "subsystem": "keyring", 00:26:27.394 "config": [ 00:26:27.394 { 00:26:27.394 "method": "keyring_file_add_key", 00:26:27.394 "params": { 00:26:27.394 "name": "key0", 00:26:27.394 "path": "/tmp/tmp.n08PQwUmHc" 00:26:27.394 } 00:26:27.394 } 00:26:27.394 ] 00:26:27.394 }, 00:26:27.394 { 00:26:27.394 "subsystem": "iobuf", 00:26:27.394 "config": [ 00:26:27.394 { 00:26:27.394 "method": "iobuf_set_options", 00:26:27.394 "params": { 00:26:27.394 "small_pool_count": 8192, 00:26:27.394 "large_pool_count": 1024, 00:26:27.394 "small_bufsize": 8192, 00:26:27.394 "large_bufsize": 135168 00:26:27.394 } 00:26:27.394 } 00:26:27.394 ] 00:26:27.394 }, 00:26:27.394 { 00:26:27.394 "subsystem": "sock", 00:26:27.394 "config": [ 00:26:27.394 { 00:26:27.394 "method": "sock_set_default_impl", 00:26:27.394 "params": { 00:26:27.394 "impl_name": "posix" 00:26:27.394 } 00:26:27.394 }, 00:26:27.394 { 00:26:27.394 "method": "sock_impl_set_options", 00:26:27.394 "params": { 00:26:27.394 "impl_name": "ssl", 00:26:27.394 "recv_buf_size": 4096, 00:26:27.394 "send_buf_size": 4096, 00:26:27.394 "enable_recv_pipe": true, 00:26:27.394 "enable_quickack": false, 00:26:27.394 "enable_placement_id": 0, 00:26:27.394 "enable_zerocopy_send_server": true, 00:26:27.394 "enable_zerocopy_send_client": false, 00:26:27.394 "zerocopy_threshold": 0, 00:26:27.394 "tls_version": 0, 00:26:27.394 "enable_ktls": false 00:26:27.395 } 00:26:27.395 }, 00:26:27.395 { 00:26:27.395 "method": "sock_impl_set_options", 00:26:27.395 "params": { 00:26:27.395 "impl_name": "posix", 00:26:27.395 "recv_buf_size": 2097152, 00:26:27.395 "send_buf_size": 2097152, 00:26:27.395 "enable_recv_pipe": true, 00:26:27.395 "enable_quickack": false, 00:26:27.395 "enable_placement_id": 0, 00:26:27.395 "enable_zerocopy_send_server": true, 00:26:27.395 "enable_zerocopy_send_client": false, 00:26:27.395 "zerocopy_threshold": 0, 00:26:27.395 "tls_version": 0, 00:26:27.395 "enable_ktls": false 00:26:27.395 } 00:26:27.395 } 00:26:27.395 ] 00:26:27.395 }, 00:26:27.395 { 00:26:27.395 "subsystem": "vmd", 00:26:27.395 "config": [] 00:26:27.395 }, 00:26:27.395 { 00:26:27.395 "subsystem": "accel", 00:26:27.395 "config": [ 00:26:27.395 { 00:26:27.395 "method": "accel_set_options", 00:26:27.395 "params": { 00:26:27.395 "small_cache_size": 128, 00:26:27.395 "large_cache_size": 16, 00:26:27.395 "task_count": 2048, 00:26:27.395 "sequence_count": 2048, 00:26:27.395 "buf_count": 2048 00:26:27.395 } 00:26:27.395 } 00:26:27.395 ] 00:26:27.395 
}, 00:26:27.395 { 00:26:27.395 "subsystem": "bdev", 00:26:27.395 "config": [ 00:26:27.395 { 00:26:27.395 "method": "bdev_set_options", 00:26:27.395 "params": { 00:26:27.395 "bdev_io_pool_size": 65535, 00:26:27.395 "bdev_io_cache_size": 256, 00:26:27.395 "bdev_auto_examine": true, 00:26:27.395 "iobuf_small_cache_size": 128, 00:26:27.395 "iobuf_large_cache_size": 16 00:26:27.395 } 00:26:27.395 }, 00:26:27.395 { 00:26:27.395 "method": "bdev_raid_set_options", 00:26:27.395 "params": { 00:26:27.395 "process_window_size_kb": 1024, 00:26:27.395 "process_max_bandwidth_mb_sec": 0 00:26:27.395 } 00:26:27.395 }, 00:26:27.395 { 00:26:27.395 "method": "bdev_iscsi_set_options", 00:26:27.395 "params": { 00:26:27.395 "timeout_sec": 30 00:26:27.395 } 00:26:27.395 }, 00:26:27.395 { 00:26:27.395 "method": "bdev_nvme_set_options", 00:26:27.395 "params": { 00:26:27.395 "action_on_timeout": "none", 00:26:27.395 "timeout_us": 0, 00:26:27.395 "timeout_admin_us": 0, 00:26:27.395 "keep_alive_timeout_ms": 10000, 00:26:27.395 "arbitration_burst": 0, 00:26:27.395 "low_priority_weight": 0, 00:26:27.395 "medium_priority_weight": 0, 00:26:27.395 "high_priority_weight": 0, 00:26:27.395 "nvme_adminq_poll_period_us": 10000, 00:26:27.395 "nvme_ioq_poll_period_us": 0, 00:26:27.395 "io_queue_requests": 512, 00:26:27.395 "delay_cmd_submit": true, 00:26:27.395 "transport_retry_count": 4, 00:26:27.395 "bdev_retry_count": 3, 00:26:27.395 "transport_ack_timeout": 0, 00:26:27.395 "ctrlr_loss_timeout_sec": 0, 00:26:27.395 "reconnect_delay_sec": 0, 00:26:27.395 "fast_io_fail_timeout_sec": 0, 00:26:27.395 "disable_auto_failback": false, 00:26:27.395 "generate_uuids": false, 00:26:27.395 "transport_tos": 0, 00:26:27.395 "nvme_error_stat": false, 00:26:27.395 "rdma_srq_size": 0, 00:26:27.395 "io_path_stat": false, 00:26:27.395 "allow_accel_sequence": false, 00:26:27.395 "rdma_max_cq_size": 0, 00:26:27.395 "rdma_cm_event_timeout_ms": 0, 00:26:27.395 "dhchap_digests": [ 00:26:27.395 "sha256", 00:26:27.395 "sha384", 00:26:27.395 "sha512" 00:26:27.395 ], 00:26:27.395 "dhchap_dhgroups": [ 00:26:27.395 "null", 00:26:27.395 "ffdhe2048", 00:26:27.395 "ffdhe3072", 00:26:27.395 "ffdhe4096", 00:26:27.395 "ffdhe6144", 00:26:27.395 "ffdhe8192" 00:26:27.395 ] 00:26:27.395 } 00:26:27.395 }, 00:26:27.395 { 00:26:27.395 "method": "bdev_nvme_attach_controller", 00:26:27.395 "params": { 00:26:27.395 "name": "nvme0", 00:26:27.395 "trtype": "TCP", 00:26:27.395 "adrfam": "IPv4", 00:26:27.395 "traddr": "10.0.0.2", 00:26:27.395 "trsvcid": "4420", 00:26:27.395 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:27.395 "prchk_reftag": false, 00:26:27.395 "prchk_guard": false, 00:26:27.395 "ctrlr_loss_timeout_sec": 0, 00:26:27.395 "reconnect_delay_sec": 0, 00:26:27.395 "fast_io_fail_timeout_sec": 0, 00:26:27.395 "psk": "key0", 00:26:27.395 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:27.395 "hdgst": false, 00:26:27.395 "ddgst": false 00:26:27.395 } 00:26:27.395 }, 00:26:27.395 { 00:26:27.395 "method": "bdev_nvme_set_hotplug", 00:26:27.395 "params": { 00:26:27.395 "period_us": 100000, 00:26:27.395 "enable": false 00:26:27.395 } 00:26:27.395 }, 00:26:27.395 { 00:26:27.395 "method": "bdev_enable_histogram", 00:26:27.395 "params": { 00:26:27.395 "name": "nvme0n1", 00:26:27.395 "enable": true 00:26:27.395 } 00:26:27.395 }, 00:26:27.395 { 00:26:27.395 "method": "bdev_wait_for_examine" 00:26:27.395 } 00:26:27.395 ] 00:26:27.395 }, 00:26:27.395 { 00:26:27.395 "subsystem": "nbd", 00:26:27.395 "config": [] 00:26:27.395 } 00:26:27.395 ] 00:26:27.395 }' 00:26:27.395 10:42:32 
nvmf_tcp.nvmf_tls -- target/tls.sh@268 -- # killprocess 2048670 00:26:27.395 10:42:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2048670 ']' 00:26:27.395 10:42:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2048670 00:26:27.395 10:42:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:26:27.395 10:42:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:27.395 10:42:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2048670 00:26:27.395 10:42:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:27.395 10:42:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:27.395 10:42:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2048670' 00:26:27.395 killing process with pid 2048670 00:26:27.395 10:42:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2048670 00:26:27.395 Received shutdown signal, test time was about 1.000000 seconds 00:26:27.395 00:26:27.395 Latency(us) 00:26:27.395 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:27.395 =================================================================================================================== 00:26:27.395 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:27.395 10:42:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2048670 00:26:27.395 10:42:33 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # killprocess 2048388 00:26:27.395 10:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2048388 ']' 00:26:27.395 10:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2048388 00:26:27.395 10:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:26:27.395 10:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:27.395 10:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2048388 00:26:27.654 10:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:27.654 10:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:27.654 10:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2048388' 00:26:27.654 killing process with pid 2048388 00:26:27.654 10:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2048388 00:26:27.654 10:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2048388 00:26:27.654 10:42:33 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:26:27.654 10:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:27.654 10:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:27.654 10:42:33 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:26:27.654 "subsystems": [ 00:26:27.654 { 00:26:27.654 "subsystem": "keyring", 00:26:27.654 "config": [ 00:26:27.654 { 00:26:27.654 "method": "keyring_file_add_key", 00:26:27.654 "params": { 00:26:27.654 "name": "key0", 00:26:27.654 "path": "/tmp/tmp.n08PQwUmHc" 00:26:27.654 } 00:26:27.654 } 00:26:27.654 ] 00:26:27.654 }, 00:26:27.654 { 00:26:27.654 "subsystem": "iobuf", 00:26:27.654 "config": [ 00:26:27.654 { 00:26:27.654 "method": "iobuf_set_options", 00:26:27.654 "params": { 00:26:27.654 "small_pool_count": 8192, 00:26:27.654 "large_pool_count": 1024, 00:26:27.654 "small_bufsize": 
8192, 00:26:27.654 "large_bufsize": 135168 00:26:27.654 } 00:26:27.654 } 00:26:27.654 ] 00:26:27.654 }, 00:26:27.654 { 00:26:27.654 "subsystem": "sock", 00:26:27.654 "config": [ 00:26:27.654 { 00:26:27.654 "method": "sock_set_default_impl", 00:26:27.654 "params": { 00:26:27.654 "impl_name": "posix" 00:26:27.654 } 00:26:27.654 }, 00:26:27.654 { 00:26:27.654 "method": "sock_impl_set_options", 00:26:27.654 "params": { 00:26:27.654 "impl_name": "ssl", 00:26:27.654 "recv_buf_size": 4096, 00:26:27.654 "send_buf_size": 4096, 00:26:27.654 "enable_recv_pipe": true, 00:26:27.654 "enable_quickack": false, 00:26:27.654 "enable_placement_id": 0, 00:26:27.654 "enable_zerocopy_send_server": true, 00:26:27.654 "enable_zerocopy_send_client": false, 00:26:27.654 "zerocopy_threshold": 0, 00:26:27.654 "tls_version": 0, 00:26:27.654 "enable_ktls": false 00:26:27.654 } 00:26:27.654 }, 00:26:27.654 { 00:26:27.654 "method": "sock_impl_set_options", 00:26:27.654 "params": { 00:26:27.654 "impl_name": "posix", 00:26:27.654 "recv_buf_size": 2097152, 00:26:27.654 "send_buf_size": 2097152, 00:26:27.654 "enable_recv_pipe": true, 00:26:27.654 "enable_quickack": false, 00:26:27.654 "enable_placement_id": 0, 00:26:27.654 "enable_zerocopy_send_server": true, 00:26:27.654 "enable_zerocopy_send_client": false, 00:26:27.654 "zerocopy_threshold": 0, 00:26:27.654 "tls_version": 0, 00:26:27.654 "enable_ktls": false 00:26:27.654 } 00:26:27.654 } 00:26:27.654 ] 00:26:27.654 }, 00:26:27.654 { 00:26:27.654 "subsystem": "vmd", 00:26:27.654 "config": [] 00:26:27.654 }, 00:26:27.654 { 00:26:27.654 "subsystem": "accel", 00:26:27.654 "config": [ 00:26:27.654 { 00:26:27.654 "method": "accel_set_options", 00:26:27.654 "params": { 00:26:27.654 "small_cache_size": 128, 00:26:27.654 "large_cache_size": 16, 00:26:27.654 "task_count": 2048, 00:26:27.654 "sequence_count": 2048, 00:26:27.654 "buf_count": 2048 00:26:27.654 } 00:26:27.654 } 00:26:27.654 ] 00:26:27.654 }, 00:26:27.654 { 00:26:27.654 "subsystem": "bdev", 00:26:27.654 "config": [ 00:26:27.654 { 00:26:27.654 "method": "bdev_set_options", 00:26:27.654 "params": { 00:26:27.654 "bdev_io_pool_size": 65535, 00:26:27.654 "bdev_io_cache_size": 256, 00:26:27.654 "bdev_auto_examine": true, 00:26:27.654 "iobuf_small_cache_size": 128, 00:26:27.654 "iobuf_large_cache_size": 16 00:26:27.654 } 00:26:27.654 }, 00:26:27.654 { 00:26:27.654 "method": "bdev_raid_set_options", 00:26:27.654 "params": { 00:26:27.654 "process_window_size_kb": 1024, 00:26:27.654 "process_max_bandwidth_mb_sec": 0 00:26:27.654 } 00:26:27.654 }, 00:26:27.654 { 00:26:27.654 "method": "bdev_iscsi_set_options", 00:26:27.654 "params": { 00:26:27.654 "timeout_sec": 30 00:26:27.654 } 00:26:27.655 }, 00:26:27.655 { 00:26:27.655 "method": "bdev_nvme_set_options", 00:26:27.655 "params": { 00:26:27.655 "action_on_timeout": "none", 00:26:27.655 "timeout_us": 0, 00:26:27.655 "timeout_admin_us": 0, 00:26:27.655 "keep_alive_timeout_ms": 10000, 00:26:27.655 "arbitration_burst": 0, 00:26:27.655 "low_priority_weight": 0, 00:26:27.655 "medium_priority_weight": 0, 00:26:27.655 "high_priority_weight": 0, 00:26:27.655 "nvme_adminq_poll_period_us": 10000, 00:26:27.655 "nvme_ioq_poll_period_us": 0, 00:26:27.655 "io_queue_requests": 0, 00:26:27.655 "delay_cmd_submit": true, 00:26:27.655 "transport_retry_count": 4, 00:26:27.655 "bdev_retry_count": 3, 00:26:27.655 "transport_ack_timeout": 0, 00:26:27.655 "ctrlr_loss_timeout_sec": 0, 00:26:27.655 "reconnect_delay_sec": 0, 00:26:27.655 "fast_io_fail_timeout_sec": 0, 00:26:27.655 "disable_auto_failback": 
false, 00:26:27.655 "generate_uuids": false, 00:26:27.655 "transport_tos": 0, 00:26:27.655 "nvme_error_stat": false, 00:26:27.655 "rdma_srq_size": 0, 00:26:27.655 "io_path_stat": false, 00:26:27.655 "allow_accel_sequence": false, 00:26:27.655 "rdma_max_cq_size": 0, 00:26:27.655 "rdma_cm_event_timeout_ms": 0, 00:26:27.655 "dhchap_digests": [ 00:26:27.655 "sha256", 00:26:27.655 "sha384", 00:26:27.655 "sha512" 00:26:27.655 ], 00:26:27.655 "dhchap_dhgroups": [ 00:26:27.655 "null", 00:26:27.655 "ffdhe2048", 00:26:27.655 "ffdhe3072", 00:26:27.655 "ffdhe4096", 00:26:27.655 "ffdhe6144", 00:26:27.655 "ffdhe8192" 00:26:27.655 ] 00:26:27.655 } 00:26:27.655 }, 00:26:27.655 { 00:26:27.655 "method": "bdev_nvme_set_hotplug", 00:26:27.655 "params": { 00:26:27.655 "period_us": 100000, 00:26:27.655 "enable": false 00:26:27.655 } 00:26:27.655 }, 00:26:27.655 { 00:26:27.655 "method": "bdev_malloc_create", 00:26:27.655 "params": { 00:26:27.655 "name": "malloc0", 00:26:27.655 "num_blocks": 8192, 00:26:27.655 "block_size": 4096, 00:26:27.655 "physical_block_size": 4096, 00:26:27.655 "uuid": "f6ced2d7-0d90-45dd-a132-e3f7f80dddac", 00:26:27.655 "optimal_io_boundary": 0 00:26:27.655 } 00:26:27.655 }, 00:26:27.655 { 00:26:27.655 "method": "bdev_wait_for_examine" 00:26:27.655 } 00:26:27.655 ] 00:26:27.655 }, 00:26:27.655 { 00:26:27.655 "subsystem": "nbd", 00:26:27.655 "config": [] 00:26:27.655 }, 00:26:27.655 { 00:26:27.655 "subsystem": "scheduler", 00:26:27.655 "config": [ 00:26:27.655 { 00:26:27.655 "method": "framework_set_scheduler", 00:26:27.655 "params": { 00:26:27.655 "name": "static" 00:26:27.655 } 00:26:27.655 } 00:26:27.655 ] 00:26:27.655 }, 00:26:27.655 { 00:26:27.655 "subsystem": "nvmf", 00:26:27.655 "config": [ 00:26:27.655 { 00:26:27.655 "method": "nvmf_set_config", 00:26:27.655 "params": { 00:26:27.655 "discovery_filter": "match_any", 00:26:27.655 "admin_cmd_passthru": { 00:26:27.655 "identify_ctrlr": false 00:26:27.655 } 00:26:27.655 } 00:26:27.655 }, 00:26:27.655 { 00:26:27.655 "method": "nvmf_set_max_subsystems", 00:26:27.655 "params": { 00:26:27.655 "max_subsystems": 1024 00:26:27.655 } 00:26:27.655 }, 00:26:27.655 { 00:26:27.655 "method": "nvmf_set_crdt", 00:26:27.655 "params": { 00:26:27.655 "crdt1": 0, 00:26:27.655 "crdt2": 0, 00:26:27.655 "crdt3": 0 00:26:27.655 } 00:26:27.655 }, 00:26:27.655 { 00:26:27.655 "method": "nvmf_create_transport", 00:26:27.655 "params": { 00:26:27.655 "trtype": "TCP", 00:26:27.655 "max_queue_depth": 128, 00:26:27.655 "max_io_qpairs_per_ctrlr": 127, 00:26:27.655 "in_capsule_data_size": 4096, 00:26:27.655 "max_io_size": 131072, 00:26:27.655 "io_unit_size": 131072, 00:26:27.655 "max_aq_depth": 128, 00:26:27.655 "num_shared_buffers": 511, 00:26:27.655 "buf_cache_size": 4294967295, 00:26:27.655 "dif_insert_or_strip": false, 00:26:27.655 "zcopy": false, 00:26:27.655 "c2h_success": false, 00:26:27.655 "sock_priority": 0, 00:26:27.655 "abort_timeout_sec": 1, 00:26:27.655 "ack_timeout": 0, 00:26:27.655 "data_wr_pool_size": 0 00:26:27.655 } 00:26:27.655 }, 00:26:27.655 { 00:26:27.655 "method": "nvmf_create_subsystem", 00:26:27.655 "params": { 00:26:27.655 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:27.655 "allow_any_host": false, 00:26:27.655 "serial_number": "00000000000000000000", 00:26:27.655 "model_number": "SPDK bdev Controller", 00:26:27.655 "max_namespaces": 32, 00:26:27.655 "min_cntlid": 1, 00:26:27.655 "max_cntlid": 65519, 00:26:27.655 "ana_reporting": false 00:26:27.655 } 00:26:27.655 }, 00:26:27.655 { 00:26:27.655 "method": "nvmf_subsystem_add_host", 00:26:27.655 
"params": { 00:26:27.655 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:27.655 "host": "nqn.2016-06.io.spdk:host1", 00:26:27.655 "psk": "key0" 00:26:27.655 } 00:26:27.655 }, 00:26:27.655 { 00:26:27.655 "method": "nvmf_subsystem_add_ns", 00:26:27.655 "params": { 00:26:27.655 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:27.655 "namespace": { 00:26:27.655 "nsid": 1, 00:26:27.655 "bdev_name": "malloc0", 00:26:27.655 "nguid": "F6CED2D70D9045DDA132E3F7F80DDDAC", 00:26:27.655 "uuid": "f6ced2d7-0d90-45dd-a132-e3f7f80dddac", 00:26:27.655 "no_auto_visible": false 00:26:27.655 } 00:26:27.655 } 00:26:27.655 }, 00:26:27.655 { 00:26:27.655 "method": "nvmf_subsystem_add_listener", 00:26:27.655 "params": { 00:26:27.655 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:27.655 "listen_address": { 00:26:27.655 "trtype": "TCP", 00:26:27.655 "adrfam": "IPv4", 00:26:27.655 "traddr": "10.0.0.2", 00:26:27.655 "trsvcid": "4420" 00:26:27.655 }, 00:26:27.655 "secure_channel": false, 00:26:27.655 "sock_impl": "ssl" 00:26:27.655 } 00:26:27.655 } 00:26:27.655 ] 00:26:27.655 } 00:26:27.655 ] 00:26:27.655 }' 00:26:27.655 10:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:27.655 10:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2049126 00:26:27.655 10:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2049126 00:26:27.655 10:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:26:27.655 10:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2049126 ']' 00:26:27.655 10:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:27.655 10:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:27.655 10:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:27.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:27.655 10:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:27.655 10:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:27.655 [2024-07-22 10:42:33.303094] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:26:27.655 [2024-07-22 10:42:33.303155] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:27.655 EAL: No free 2048 kB hugepages reported on node 1 00:26:27.914 [2024-07-22 10:42:33.374542] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:27.915 [2024-07-22 10:42:33.406171] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:27.915 [2024-07-22 10:42:33.406210] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:27.915 [2024-07-22 10:42:33.406217] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:27.915 [2024-07-22 10:42:33.406224] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:27.915 [2024-07-22 10:42:33.406229] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:27.915 [2024-07-22 10:42:33.406278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:27.915 [2024-07-22 10:42:33.597101] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:28.174 [2024-07-22 10:42:33.634424] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:28.174 [2024-07-22 10:42:33.634653] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:28.431 10:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:28.431 10:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:26:28.431 10:42:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:28.431 10:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:28.431 10:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:28.431 10:42:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:28.431 10:42:34 nvmf_tcp.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=2049450 00:26:28.431 10:42:34 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 2049450 /var/tmp/bdevperf.sock 00:26:28.431 10:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2049450 ']' 00:26:28.431 10:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:28.431 10:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:28.431 10:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:28.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
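With the target listening again on 10.0.0.2 port 4420, the bdevperf side repeats the TLS attach exercised earlier in this run: the PSK file is registered in bdevperf's keyring and the controller is attached with that key. The two rpc.py calls, copied from the earlier pass with paths shortened, are:

  # register the pre-shared key file under the name key0 in bdevperf's keyring
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.n08PQwUmHc
  # attach the controller over TCP/IPv4, using key0 for the TLS handshake
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

In this pass the same settings are delivered instead as part of the JSON handed to bdevperf on /dev/fd/63 below, so no separate attach RPC is issued.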
00:26:28.432 10:42:34 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:26:28.432 10:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:28.432 10:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:28.432 10:42:34 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:26:28.432 "subsystems": [ 00:26:28.432 { 00:26:28.432 "subsystem": "keyring", 00:26:28.432 "config": [ 00:26:28.432 { 00:26:28.432 "method": "keyring_file_add_key", 00:26:28.432 "params": { 00:26:28.432 "name": "key0", 00:26:28.432 "path": "/tmp/tmp.n08PQwUmHc" 00:26:28.432 } 00:26:28.432 } 00:26:28.432 ] 00:26:28.432 }, 00:26:28.432 { 00:26:28.432 "subsystem": "iobuf", 00:26:28.432 "config": [ 00:26:28.432 { 00:26:28.432 "method": "iobuf_set_options", 00:26:28.432 "params": { 00:26:28.432 "small_pool_count": 8192, 00:26:28.432 "large_pool_count": 1024, 00:26:28.432 "small_bufsize": 8192, 00:26:28.432 "large_bufsize": 135168 00:26:28.432 } 00:26:28.432 } 00:26:28.432 ] 00:26:28.432 }, 00:26:28.432 { 00:26:28.432 "subsystem": "sock", 00:26:28.432 "config": [ 00:26:28.432 { 00:26:28.432 "method": "sock_set_default_impl", 00:26:28.432 "params": { 00:26:28.432 "impl_name": "posix" 00:26:28.432 } 00:26:28.432 }, 00:26:28.432 { 00:26:28.432 "method": "sock_impl_set_options", 00:26:28.432 "params": { 00:26:28.432 "impl_name": "ssl", 00:26:28.432 "recv_buf_size": 4096, 00:26:28.432 "send_buf_size": 4096, 00:26:28.432 "enable_recv_pipe": true, 00:26:28.432 "enable_quickack": false, 00:26:28.432 "enable_placement_id": 0, 00:26:28.432 "enable_zerocopy_send_server": true, 00:26:28.432 "enable_zerocopy_send_client": false, 00:26:28.432 "zerocopy_threshold": 0, 00:26:28.432 "tls_version": 0, 00:26:28.432 "enable_ktls": false 00:26:28.432 } 00:26:28.432 }, 00:26:28.432 { 00:26:28.432 "method": "sock_impl_set_options", 00:26:28.432 "params": { 00:26:28.432 "impl_name": "posix", 00:26:28.432 "recv_buf_size": 2097152, 00:26:28.432 "send_buf_size": 2097152, 00:26:28.432 "enable_recv_pipe": true, 00:26:28.432 "enable_quickack": false, 00:26:28.432 "enable_placement_id": 0, 00:26:28.432 "enable_zerocopy_send_server": true, 00:26:28.432 "enable_zerocopy_send_client": false, 00:26:28.432 "zerocopy_threshold": 0, 00:26:28.432 "tls_version": 0, 00:26:28.432 "enable_ktls": false 00:26:28.432 } 00:26:28.432 } 00:26:28.432 ] 00:26:28.432 }, 00:26:28.432 { 00:26:28.432 "subsystem": "vmd", 00:26:28.432 "config": [] 00:26:28.432 }, 00:26:28.432 { 00:26:28.432 "subsystem": "accel", 00:26:28.432 "config": [ 00:26:28.432 { 00:26:28.432 "method": "accel_set_options", 00:26:28.432 "params": { 00:26:28.432 "small_cache_size": 128, 00:26:28.432 "large_cache_size": 16, 00:26:28.432 "task_count": 2048, 00:26:28.432 "sequence_count": 2048, 00:26:28.432 "buf_count": 2048 00:26:28.432 } 00:26:28.432 } 00:26:28.432 ] 00:26:28.432 }, 00:26:28.432 { 00:26:28.432 "subsystem": "bdev", 00:26:28.432 "config": [ 00:26:28.432 { 00:26:28.432 "method": "bdev_set_options", 00:26:28.432 "params": { 00:26:28.432 "bdev_io_pool_size": 65535, 00:26:28.432 "bdev_io_cache_size": 256, 00:26:28.432 "bdev_auto_examine": true, 00:26:28.432 "iobuf_small_cache_size": 128, 00:26:28.432 "iobuf_large_cache_size": 16 00:26:28.432 } 00:26:28.432 }, 00:26:28.432 { 00:26:28.432 "method": "bdev_raid_set_options", 00:26:28.432 "params": { 00:26:28.432 "process_window_size_kb": 1024, 00:26:28.432 
"process_max_bandwidth_mb_sec": 0 00:26:28.432 } 00:26:28.432 }, 00:26:28.432 { 00:26:28.432 "method": "bdev_iscsi_set_options", 00:26:28.432 "params": { 00:26:28.432 "timeout_sec": 30 00:26:28.432 } 00:26:28.432 }, 00:26:28.432 { 00:26:28.432 "method": "bdev_nvme_set_options", 00:26:28.432 "params": { 00:26:28.432 "action_on_timeout": "none", 00:26:28.432 "timeout_us": 0, 00:26:28.432 "timeout_admin_us": 0, 00:26:28.432 "keep_alive_timeout_ms": 10000, 00:26:28.432 "arbitration_burst": 0, 00:26:28.432 "low_priority_weight": 0, 00:26:28.432 "medium_priority_weight": 0, 00:26:28.432 "high_priority_weight": 0, 00:26:28.432 "nvme_adminq_poll_period_us": 10000, 00:26:28.432 "nvme_ioq_poll_period_us": 0, 00:26:28.432 "io_queue_requests": 512, 00:26:28.432 "delay_cmd_submit": true, 00:26:28.432 "transport_retry_count": 4, 00:26:28.432 "bdev_retry_count": 3, 00:26:28.432 "transport_ack_timeout": 0, 00:26:28.432 "ctrlr_loss_timeout_sec": 0, 00:26:28.432 "reconnect_delay_sec": 0, 00:26:28.432 "fast_io_fail_timeout_sec": 0, 00:26:28.432 "disable_auto_failback": false, 00:26:28.432 "generate_uuids": false, 00:26:28.432 "transport_tos": 0, 00:26:28.432 "nvme_error_stat": false, 00:26:28.432 "rdma_srq_size": 0, 00:26:28.432 "io_path_stat": false, 00:26:28.432 "allow_accel_sequence": false, 00:26:28.432 "rdma_max_cq_size": 0, 00:26:28.432 "rdma_cm_event_timeout_ms": 0, 00:26:28.432 "dhchap_digests": [ 00:26:28.432 "sha256", 00:26:28.432 "sha384", 00:26:28.432 "sha512" 00:26:28.432 ], 00:26:28.432 "dhchap_dhgroups": [ 00:26:28.432 "null", 00:26:28.432 "ffdhe2048", 00:26:28.432 "ffdhe3072", 00:26:28.432 "ffdhe4096", 00:26:28.432 "ffdhe6144", 00:26:28.432 "ffdhe8192" 00:26:28.432 ] 00:26:28.432 } 00:26:28.432 }, 00:26:28.432 { 00:26:28.432 "method": "bdev_nvme_attach_controller", 00:26:28.432 "params": { 00:26:28.432 "name": "nvme0", 00:26:28.432 "trtype": "TCP", 00:26:28.432 "adrfam": "IPv4", 00:26:28.432 "traddr": "10.0.0.2", 00:26:28.432 "trsvcid": "4420", 00:26:28.432 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:28.432 "prchk_reftag": false, 00:26:28.432 "prchk_guard": false, 00:26:28.432 "ctrlr_loss_timeout_sec": 0, 00:26:28.432 "reconnect_delay_sec": 0, 00:26:28.432 "fast_io_fail_timeout_sec": 0, 00:26:28.432 "psk": "key0", 00:26:28.432 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:28.432 "hdgst": false, 00:26:28.432 "ddgst": false 00:26:28.432 } 00:26:28.432 }, 00:26:28.432 { 00:26:28.432 "method": "bdev_nvme_set_hotplug", 00:26:28.432 "params": { 00:26:28.432 "period_us": 100000, 00:26:28.432 "enable": false 00:26:28.432 } 00:26:28.432 }, 00:26:28.432 { 00:26:28.432 "method": "bdev_enable_histogram", 00:26:28.432 "params": { 00:26:28.432 "name": "nvme0n1", 00:26:28.432 "enable": true 00:26:28.432 } 00:26:28.432 }, 00:26:28.432 { 00:26:28.432 "method": "bdev_wait_for_examine" 00:26:28.432 } 00:26:28.432 ] 00:26:28.432 }, 00:26:28.432 { 00:26:28.432 "subsystem": "nbd", 00:26:28.432 "config": [] 00:26:28.432 } 00:26:28.432 ] 00:26:28.432 }' 00:26:28.691 [2024-07-22 10:42:34.140001] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
00:26:28.691 [2024-07-22 10:42:34.140055] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2049450 ] 00:26:28.691 EAL: No free 2048 kB hugepages reported on node 1 00:26:28.691 [2024-07-22 10:42:34.220523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.691 [2024-07-22 10:42:34.249081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:28.692 [2024-07-22 10:42:34.377562] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:29.261 10:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:29.261 10:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:26:29.261 10:42:34 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:29.261 10:42:34 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:26:29.521 10:42:35 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.521 10:42:35 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:29.521 Running I/O for 1 seconds... 00:26:30.554 00:26:30.554 Latency(us) 00:26:30.554 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:30.554 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:30.554 Verification LBA range: start 0x0 length 0x2000 00:26:30.554 nvme0n1 : 1.02 4766.89 18.62 0.00 0.00 26631.81 6034.77 38229.33 00:26:30.554 =================================================================================================================== 00:26:30.554 Total : 4766.89 18.62 0.00 0.00 26631.81 6034.77 38229.33 00:26:30.554 0 00:26:30.554 10:42:36 nvmf_tcp.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:26:30.554 10:42:36 nvmf_tcp.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:26:30.554 10:42:36 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:26:30.554 10:42:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:26:30.554 10:42:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:26:30.554 10:42:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:26:30.554 10:42:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:26:30.554 10:42:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:26:30.554 10:42:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:26:30.554 10:42:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:26:30.554 10:42:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:26:30.554 nvmf_trace.0 00:26:30.815 10:42:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:26:30.816 10:42:36 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 2049450 00:26:30.816 10:42:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2049450 ']' 00:26:30.816 10:42:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- 
# kill -0 2049450 00:26:30.816 10:42:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:26:30.816 10:42:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:30.816 10:42:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2049450 00:26:30.816 10:42:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:30.816 10:42:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:30.816 10:42:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2049450' 00:26:30.816 killing process with pid 2049450 00:26:30.816 10:42:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2049450 00:26:30.816 Received shutdown signal, test time was about 1.000000 seconds 00:26:30.816 00:26:30.816 Latency(us) 00:26:30.816 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:30.816 =================================================================================================================== 00:26:30.816 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:30.816 10:42:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2049450 00:26:30.816 10:42:36 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:26:30.816 10:42:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:30.816 10:42:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:26:30.816 10:42:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:30.816 10:42:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:26:30.816 10:42:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:30.816 10:42:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:30.816 rmmod nvme_tcp 00:26:30.816 rmmod nvme_fabrics 00:26:30.816 rmmod nvme_keyring 00:26:30.816 10:42:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:30.816 10:42:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:26:30.816 10:42:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:26:30.816 10:42:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 2049126 ']' 00:26:30.816 10:42:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 2049126 00:26:30.816 10:42:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2049126 ']' 00:26:30.816 10:42:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2049126 00:26:30.816 10:42:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:26:30.816 10:42:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:31.077 10:42:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2049126 00:26:31.077 10:42:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:31.077 10:42:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:31.077 10:42:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2049126' 00:26:31.077 killing process with pid 2049126 00:26:31.077 10:42:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2049126 00:26:31.077 10:42:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2049126 00:26:31.077 10:42:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:31.077 10:42:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:31.077 10:42:36 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:31.077 10:42:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:31.077 10:42:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:31.077 10:42:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:31.077 10:42:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:31.077 10:42:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.627 10:42:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:33.627 10:42:38 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.TReI68Mgjt /tmp/tmp.kTA3Pa6KjJ /tmp/tmp.n08PQwUmHc 00:26:33.627 00:26:33.627 real 1m20.476s 00:26:33.627 user 1m59.542s 00:26:33.627 sys 0m27.145s 00:26:33.627 10:42:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:33.627 10:42:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:33.627 ************************************ 00:26:33.627 END TEST nvmf_tls 00:26:33.627 ************************************ 00:26:33.627 10:42:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:33.627 10:42:38 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:26:33.627 10:42:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:33.627 10:42:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:33.627 10:42:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:33.627 ************************************ 00:26:33.627 START TEST nvmf_fips 00:26:33.627 ************************************ 00:26:33.627 10:42:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:26:33.627 * Looking for test storage... 
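The first functional gate in fips.sh, traced a few lines further down, is an OpenSSL version check: the installed version (3.0.9 here, taken from openssl version | awk '{print $2}') must be at least the 3.0.0 target, compared field by field by cmp_versions in scripts/common.sh. A crude stand-in for that comparison, shown only to make the gate's intent explicit and not how the script actually implements it, could be:

  installed=$(openssl version | awk '{print $2}')   # "3.0.9" on this host
  minimum=3.0.0
  # require that the minimum sorts at or below the installed version;
  # cmp_versions instead walks the dot-separated fields one by one
  [[ "$(printf '%s\n' "$minimum" "$installed" | sort -V | head -n1)" == "$minimum" ]]

After the version gate the script locates the FIPS provider module (openssl info -modulesdir, openssl list -providers) and then expects a plain openssl md5 to fail, which is exactly the 'Error setting digest' output captured below.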
00:26:33.627 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:26:33.627 10:42:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:33.627 10:42:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:26:33.627 10:42:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:33.627 10:42:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:33.627 10:42:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:33.627 10:42:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:33.627 10:42:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:33.627 10:42:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:33.627 10:42:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:33.627 10:42:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:33.627 10:42:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:33.627 10:42:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:33.627 10:42:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:33.627 10:42:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:33.627 10:42:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:33.627 10:42:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:33.627 10:42:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:33.627 10:42:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:33.627 10:42:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:33.627 10:42:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:33.627 10:42:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:33.627 10:42:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:33.627 10:42:38 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.627 10:42:38 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.627 10:42:38 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.627 10:42:38 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:26:33.627 10:42:38 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.627 10:42:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:26:33.627 10:42:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:33.627 10:42:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:33.627 10:42:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:33.627 10:42:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:33.627 10:42:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:33.627 10:42:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:33.627 10:42:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:33.627 10:42:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:33.627 10:42:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:33.627 10:42:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:26:33.627 10:42:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:26:33.627 10:42:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:26:33.627 10:42:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:26:33.627 10:42:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:26:33.627 10:42:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:26:33.627 10:42:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:26:33.627 10:42:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:26:33.627 10:42:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:26:33.627 10:42:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:26:33.627 10:42:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:26:33.627 10:42:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:26:33.627 10:42:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:26:33.627 10:42:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:26:33.627 10:42:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:26:33.628 10:42:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:26:33.628 10:42:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:26:33.628 10:42:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:26:33.628 10:42:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:26:33.628 10:42:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:33.628 10:42:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:26:33.628 10:42:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:26:33.628 10:42:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:26:33.628 10:42:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:26:33.628 10:42:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:26:33.628 10:42:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:26:33.628 10:42:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:26:33.628 10:42:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:26:33.628 10:42:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:26:33.628 10:42:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:26:33.628 10:42:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:26:33.628 10:42:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:26:33.628 10:42:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:26:33.628 10:42:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:33.628 10:42:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:26:33.628 10:42:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:26:33.628 10:42:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:26:33.628 10:42:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:26:33.628 10:42:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:26:33.628 Error setting digest 00:26:33.628 00020F11107F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:26:33.628 00020F11107F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:26:33.628 10:42:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:41.770 
10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:41.770 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:41.770 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:41.770 Found net devices under 0000:31:00.0: cvl_0_0 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:41.770 Found net devices under 0000:31:00.1: cvl_0_1 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:41.770 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:41.771 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:41.771 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:41.771 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:41.771 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:41.771 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:41.771 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:41.771 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:41.771 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:41.771 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:41.771 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:41.771 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:41.771 10:42:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:41.771 10:42:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:41.771 10:42:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:41.771 10:42:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:41.771 10:42:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:41.771 10:42:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:41.771 10:42:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:41.771 10:42:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:41.771 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:41.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.592 ms 00:26:41.771 00:26:41.771 --- 10.0.0.2 ping statistics --- 00:26:41.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.771 rtt min/avg/max/mdev = 0.592/0.592/0.592/0.000 ms 00:26:41.771 10:42:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:41.771 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:41.771 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.416 ms 00:26:41.771 00:26:41.771 --- 10.0.0.1 ping statistics --- 00:26:41.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.771 rtt min/avg/max/mdev = 0.416/0.416/0.416/0.000 ms 00:26:41.771 10:42:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:41.771 10:42:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:26:41.771 10:42:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:41.771 10:42:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:41.771 10:42:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:41.771 10:42:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:41.771 10:42:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:41.771 10:42:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:41.771 10:42:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:41.771 10:42:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:26:41.771 10:42:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:41.771 10:42:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:41.771 10:42:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:41.771 10:42:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=2054534 00:26:41.771 10:42:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 2054534 00:26:41.771 10:42:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:41.771 10:42:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 2054534 ']' 00:26:41.771 10:42:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:41.771 10:42:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:41.771 10:42:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:41.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:41.771 10:42:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:41.771 10:42:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:41.771 [2024-07-22 10:42:47.310354] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:26:41.771 [2024-07-22 10:42:47.310426] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:41.771 EAL: No free 2048 kB hugepages reported on node 1 00:26:41.771 [2024-07-22 10:42:47.400557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:41.771 [2024-07-22 10:42:47.430577] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:41.771 [2024-07-22 10:42:47.430611] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
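The block above is the environment bring-up that every nvmf phy test repeats: the two E810 ports at 0000:31:00.0/0000:31:00.1 show up as cvl_0_0 and cvl_0_1, the target-side port is moved into a private network namespace, both sides get a 10.0.0.x/24 address, TCP port 4420 is opened, connectivity is pinged in both directions, and nvmf_tgt is then started inside that namespace on core mask 0x2. A minimal standalone restatement is sketched below; the interface names, addresses and core mask are copied from this log, while the nvmf_tgt path is whatever the local SPDK build produces, so treat this as an illustrative sketch rather than the exact nvmf/common.sh implementation.

# Hedged sketch of the netns wiring performed by nvmf/common.sh in the trace above.
NS=cvl_0_0_ns_spdk        # target namespace name used in this run
TGT_IF=cvl_0_0            # E810 port 0000:31:00.0 (target side)
INI_IF=cvl_0_1            # E810 port 0000:31:00.1 (initiator side)

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                         # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1     # target -> initiator

# Start the target inside the namespace; the binary path is build-specific.
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

Putting the target behind a namespace lets a single host drive real NIC-to-NIC NVMe/TCP traffic, with the initiator-side tools running in the root namespace against 10.0.0.2.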
00:26:41.771 [2024-07-22 10:42:47.430619] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:41.771 [2024-07-22 10:42:47.430626] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:41.771 [2024-07-22 10:42:47.430631] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:41.771 [2024-07-22 10:42:47.430653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:42.713 10:42:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:42.713 10:42:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:26:42.713 10:42:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:42.713 10:42:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:42.713 10:42:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:42.713 10:42:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:42.713 10:42:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:26:42.713 10:42:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:26:42.713 10:42:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:26:42.713 10:42:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:26:42.713 10:42:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:26:42.713 10:42:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:26:42.713 10:42:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:26:42.713 10:42:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:42.713 [2024-07-22 10:42:48.303678] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:42.713 [2024-07-22 10:42:48.319670] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:42.713 [2024-07-22 10:42:48.319966] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:42.713 [2024-07-22 10:42:48.349703] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:26:42.713 malloc0 00:26:42.713 10:42:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:42.713 10:42:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=2054863 00:26:42.713 10:42:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 2054863 /var/tmp/bdevperf.sock 00:26:42.713 10:42:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:26:42.713 10:42:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 2054863 ']' 00:26:42.713 10:42:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:42.713 10:42:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- 
# local max_retries=100 00:26:42.713 10:42:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:42.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:42.713 10:42:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:42.713 10:42:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:42.973 [2024-07-22 10:42:48.450979] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:26:42.973 [2024-07-22 10:42:48.451053] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2054863 ] 00:26:42.973 EAL: No free 2048 kB hugepages reported on node 1 00:26:42.973 [2024-07-22 10:42:48.511492] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:42.973 [2024-07-22 10:42:48.548316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:43.541 10:42:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:43.541 10:42:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:26:43.541 10:42:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:26:43.801 [2024-07-22 10:42:49.327534] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:43.801 [2024-07-22 10:42:49.327597] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:26:43.801 TLSTESTn1 00:26:43.801 10:42:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:44.061 Running I/O for 10 seconds... 
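By this point the FIPS preconditions were already established further up the trace: both the base and the Red Hat FIPS providers are listed, OPENSSL_CONF points at the generated spdk_fips.conf, and the deliberate openssl md5 attempt fails with "Error setting digest", confirming that non-approved digests are rejected. The actual data-path check is the bdevperf run whose results follow: a TLS-PSK NVMe/TCP controller is attached and a 10-second verify workload is driven through it. The sketch below restates the initiator-side commands with the key and RPC arguments visible in this trace; paths are relative to an SPDK build tree and are illustrative.

# Hedged sketch of the TLS bdevperf flow traced above (key and RPC arguments copied from the log).
KEY='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
KEY_PATH=test/nvmf/fips/key.txt
echo -n "$KEY" > "$KEY_PATH"
chmod 0600 "$KEY_PATH"                     # PSK file must be owner-readable only

# bdevperf starts paused (-z) with its own RPC socket: queue depth 128, 4 KiB verify I/O, 10 s.
./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 &

# Attach an NVMe/TCP controller over TLS using the PSK, then launch the workload.
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk "$KEY_PATH"
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The target side was given the same key file when the subsystem was configured (the nvmf_tcp_subsystem_add_host warning about the deprecated PSK path in the trace above reflects that), so both ends derive the TLS session from the shared NVMeTLSkey.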
00:26:54.087 00:26:54.087 Latency(us) 00:26:54.087 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:54.087 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:54.087 Verification LBA range: start 0x0 length 0x2000 00:26:54.087 TLSTESTn1 : 10.04 3767.61 14.72 0.00 0.00 33900.52 6253.23 76895.57 00:26:54.087 =================================================================================================================== 00:26:54.087 Total : 3767.61 14.72 0.00 0.00 33900.52 6253.23 76895.57 00:26:54.087 0 00:26:54.087 10:42:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:26:54.087 10:42:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:26:54.087 10:42:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:26:54.087 10:42:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:26:54.087 10:42:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:26:54.087 10:42:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:26:54.087 10:42:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:26:54.087 10:42:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:26:54.087 10:42:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:26:54.087 10:42:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:26:54.087 nvmf_trace.0 00:26:54.087 10:42:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:26:54.087 10:42:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2054863 00:26:54.087 10:42:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 2054863 ']' 00:26:54.087 10:42:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 2054863 00:26:54.087 10:42:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:26:54.087 10:42:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:54.087 10:42:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2054863 00:26:54.087 10:42:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:26:54.087 10:42:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:26:54.087 10:42:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2054863' 00:26:54.087 killing process with pid 2054863 00:26:54.087 10:42:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 2054863 00:26:54.087 Received shutdown signal, test time was about 10.000000 seconds 00:26:54.087 00:26:54.087 Latency(us) 00:26:54.087 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:54.087 =================================================================================================================== 00:26:54.087 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:54.087 [2024-07-22 10:42:59.735037] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:26:54.087 10:42:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 2054863 00:26:54.348 10:42:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:26:54.348 10:42:59 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:26:54.348 10:42:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:26:54.348 10:42:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:54.348 10:42:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:26:54.348 10:42:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:54.348 10:42:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:54.348 rmmod nvme_tcp 00:26:54.348 rmmod nvme_fabrics 00:26:54.348 rmmod nvme_keyring 00:26:54.348 10:42:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:54.348 10:42:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:26:54.348 10:42:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:26:54.348 10:42:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 2054534 ']' 00:26:54.348 10:42:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 2054534 00:26:54.348 10:42:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 2054534 ']' 00:26:54.348 10:42:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 2054534 00:26:54.348 10:42:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:26:54.348 10:42:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:54.348 10:42:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2054534 00:26:54.348 10:42:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:54.348 10:42:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:54.348 10:42:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2054534' 00:26:54.348 killing process with pid 2054534 00:26:54.348 10:42:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 2054534 00:26:54.348 [2024-07-22 10:42:59.937987] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:26:54.348 10:42:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 2054534 00:26:54.610 10:43:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:54.610 10:43:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:54.610 10:43:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:54.610 10:43:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:54.610 10:43:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:54.610 10:43:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:54.610 10:43:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:54.610 10:43:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:56.529 10:43:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:56.529 10:43:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:26:56.529 00:26:56.529 real 0m23.291s 00:26:56.529 user 0m23.437s 00:26:56.529 sys 0m10.494s 00:26:56.529 10:43:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:56.529 10:43:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:56.529 ************************************ 00:26:56.529 END TEST nvmf_fips 
00:26:56.529 ************************************ 00:26:56.529 10:43:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:56.529 10:43:02 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:26:56.529 10:43:02 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:26:56.529 10:43:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:56.529 10:43:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:56.529 10:43:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:56.529 ************************************ 00:26:56.529 START TEST nvmf_fuzz 00:26:56.529 ************************************ 00:26:56.529 10:43:02 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:26:56.790 * Looking for test storage... 00:26:56.790 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:56.790 10:43:02 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:56.790 10:43:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:26:56.790 10:43:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:56.790 10:43:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:56.790 10:43:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:56.790 10:43:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:56.790 10:43:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:56.790 10:43:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:56.790 10:43:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:56.790 10:43:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:56.790 10:43:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:56.790 10:43:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:56.790 10:43:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:56.790 10:43:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:56.790 10:43:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:56.790 10:43:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:56.790 10:43:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:56.790 10:43:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:56.790 10:43:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:56.790 10:43:02 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:56.790 10:43:02 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:56.790 10:43:02 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:56.790 10:43:02 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.790 10:43:02 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.790 10:43:02 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.790 10:43:02 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:26:56.790 10:43:02 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.790 10:43:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:26:56.790 10:43:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:56.790 10:43:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:56.790 10:43:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:56.791 10:43:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:56.791 10:43:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:56.791 10:43:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:56.791 10:43:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:56.791 10:43:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:56.791 10:43:02 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:26:56.791 10:43:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:56.791 10:43:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:56.791 10:43:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:56.791 10:43:02 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:56.791 10:43:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:56.791 10:43:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:56.791 10:43:02 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:56.791 10:43:02 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:56.791 10:43:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:56.791 10:43:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:56.791 10:43:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:26:56.791 10:43:02 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:04.943 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:04.943 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:27:04.943 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:04.943 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:04.943 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:04.943 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:04.943 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:04.943 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:27:04.943 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:04.943 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:27:04.943 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:27:04.943 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:27:04.943 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:27:04.943 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:27:04.943 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:27:04.943 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:04.943 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:04.943 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:04.943 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:04.943 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:04.943 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:04.943 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:04.943 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:04.943 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:04.943 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:04.943 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:04.943 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:04.943 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:04.943 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 
-- # [[ e810 == mlx5 ]] 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:04.944 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:04.944 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:04.944 Found net devices under 0000:31:00.0: cvl_0_0 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 
-- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:04.944 Found net devices under 0000:31:00.1: cvl_0_1 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:04.944 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:04.944 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms 00:27:04.944 00:27:04.944 --- 10.0.0.2 ping statistics --- 00:27:04.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:04.944 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:04.944 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:04.944 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:27:04.944 00:27:04.944 --- 10.0.0.1 ping statistics --- 00:27:04.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:04.944 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=2061620 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 2061620 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@829 -- # '[' -z 2061620 ']' 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:04.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
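The fuzz variant repeats the same namespace and NIC bring-up as the FIPS test (same cvl_0_0/cvl_0_1 wiring, nvmf_tgt now on core mask 0x1), and the rpc_cmd calls traced just below then provision a minimal target for the fabrics fuzzer: a TCP transport, a RAM-backed malloc bdev, and one subsystem listening on 10.0.0.2:4420. A hedged restatement as a plain script follows; rpc.py is assumed to talk to the /var/tmp/spdk.sock of the target started above, and the bdev_malloc_create comment reflects the usual size-in-MiB / block-size-in-bytes convention.

# Sketch of the target provisioning the trace below performs via rpc_cmd.
RPC=./scripts/rpc.py      # defaults to /var/tmp/spdk.sock, run inside the test's netns wrapper

$RPC nvmf_create_transport -t tcp -o -u 8192      # TCP transport with the options fabrics_fuzz.sh passes
$RPC bdev_malloc_create -b Malloc0 64 512         # 64 MiB RAM bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The fuzzer is then pointed at this subsystem through the trid string 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' shown below.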
00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:04.944 10:43:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:05.517 10:43:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:05.517 10:43:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@862 -- # return 0 00:27:05.517 10:43:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:05.517 10:43:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.517 10:43:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:05.517 10:43:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.517 10:43:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:27:05.517 10:43:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.517 10:43:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:05.777 Malloc0 00:27:05.777 10:43:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.777 10:43:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:05.777 10:43:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.777 10:43:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:05.777 10:43:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.777 10:43:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:05.777 10:43:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.777 10:43:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:05.777 10:43:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.777 10:43:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:05.777 10:43:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.777 10:43:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:05.777 10:43:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.777 10:43:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:27:05.777 10:43:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:27:37.903 Fuzzing completed. 
Shutting down the fuzz application 00:27:37.903 00:27:37.903 Dumping successful admin opcodes: 00:27:37.903 8, 9, 10, 24, 00:27:37.903 Dumping successful io opcodes: 00:27:37.903 0, 9, 00:27:37.903 NS: 0x200003aeff00 I/O qp, Total commands completed: 928959, total successful commands: 5412, random_seed: 3604761600 00:27:37.903 NS: 0x200003aeff00 admin qp, Total commands completed: 117337, total successful commands: 959, random_seed: 3205364992 00:27:37.903 10:43:41 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:27:37.903 Fuzzing completed. Shutting down the fuzz application 00:27:37.903 00:27:37.903 Dumping successful admin opcodes: 00:27:37.903 24, 00:27:37.903 Dumping successful io opcodes: 00:27:37.903 00:27:37.903 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1988918189 00:27:37.903 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1988989799 00:27:37.903 10:43:42 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:37.903 10:43:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.903 10:43:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:37.903 10:43:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.903 10:43:42 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:27:37.903 10:43:42 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:27:37.903 10:43:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:37.903 10:43:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:27:37.903 10:43:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:37.903 10:43:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:27:37.903 10:43:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:37.903 10:43:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:37.903 rmmod nvme_tcp 00:27:37.903 rmmod nvme_fabrics 00:27:37.903 rmmod nvme_keyring 00:27:37.903 10:43:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:37.903 10:43:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:27:37.903 10:43:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:27:37.903 10:43:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 2061620 ']' 00:27:37.903 10:43:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 2061620 00:27:37.903 10:43:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@948 -- # '[' -z 2061620 ']' 00:27:37.903 10:43:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # kill -0 2061620 00:27:37.903 10:43:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # uname 00:27:37.903 10:43:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:37.903 10:43:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2061620 00:27:37.903 10:43:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:37.903 10:43:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 
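Two fuzzer passes are visible above: the first nvme_fuzz invocation (with -t 30 -S 123456 -N -a) drives randomly generated admin and I/O commands at the subsystem for roughly half a minute and reports which opcodes the target accepted, while the second replays the canned commands in example.json and, as expected, completes only a handful of admin commands. If the opcode dumps are read as decimal NVMe opcodes, the accepted admin set 8, 9, 10, 24 would correspond to Abort, Set Features, Get Features and Keep Alive, and the accepted I/O set 0, 9 to Flush and Dataset Management; that reading is an interpretation of the dump, not something the log states. The autotest keeps the raw fuzzer output in nvmf_fuzz_logs1.txt / nvmf_fuzz_logs2.txt until they are removed at the end of the test below; a small, illustrative one-liner for summarising such a saved log is sketched here.

# Hedged helper: summarise the per-queue completion counters from a saved nvme_fuzz log.
grep 'Total commands completed' nvmf_fuzz_logs1.txt \
  | sed 's/.*Total commands completed: \([0-9]*\), total successful commands: \([0-9]*\).*/\1 \2/' \
  | awk '{pct = ($1 > 0) ? 100 * $2 / $1 : 0;
          printf "%s attempted, %s accepted (%.2f%%)\n", $1, $2, pct}'

For the random pass above this works out to 5412 accepted out of 928959 attempted I/O commands (about 0.6%), i.e. the target stayed up while rejecting the overwhelming majority of the malformed requests.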
00:27:37.903 10:43:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2061620' 00:27:37.903 killing process with pid 2061620 00:27:37.903 10:43:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@967 -- # kill 2061620 00:27:37.903 10:43:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@972 -- # wait 2061620 00:27:37.903 10:43:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:37.903 10:43:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:37.903 10:43:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:37.903 10:43:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:37.903 10:43:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:37.903 10:43:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:37.903 10:43:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:37.903 10:43:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:39.814 10:43:45 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:39.814 10:43:45 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:27:39.814 00:27:39.814 real 0m42.965s 00:27:39.814 user 0m56.311s 00:27:39.814 sys 0m16.083s 00:27:39.814 10:43:45 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:39.814 10:43:45 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:39.814 ************************************ 00:27:39.814 END TEST nvmf_fuzz 00:27:39.814 ************************************ 00:27:39.814 10:43:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:39.814 10:43:45 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:27:39.814 10:43:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:39.814 10:43:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:39.814 10:43:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:39.814 ************************************ 00:27:39.814 START TEST nvmf_multiconnection 00:27:39.814 ************************************ 00:27:39.814 10:43:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:27:39.814 * Looking for test storage... 
00:27:39.814 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:39.814 10:43:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:39.814 10:43:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:27:39.814 10:43:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:39.814 10:43:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:39.814 10:43:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:39.814 10:43:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:39.814 10:43:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:39.814 10:43:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:39.814 10:43:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:39.814 10:43:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:39.814 10:43:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:39.814 10:43:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:39.814 10:43:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:39.814 10:43:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:39.814 10:43:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:39.814 10:43:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:39.814 10:43:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:39.814 10:43:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:39.814 10:43:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:39.814 10:43:45 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:39.814 10:43:45 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:39.814 10:43:45 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:39.814 10:43:45 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.814 10:43:45 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.814 10:43:45 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.814 10:43:45 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:27:39.814 10:43:45 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.814 10:43:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:27:39.814 10:43:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:39.814 10:43:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:39.814 10:43:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:39.814 10:43:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:39.814 10:43:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:39.814 10:43:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:39.814 10:43:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:39.814 10:43:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:39.814 10:43:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:39.814 10:43:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:39.814 10:43:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:27:39.814 10:43:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:27:39.814 10:43:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:39.814 10:43:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:39.814 10:43:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:39.814 10:43:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:27:39.814 10:43:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:39.814 10:43:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:39.814 10:43:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:39.814 10:43:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:39.814 10:43:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:39.814 10:43:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:39.814 10:43:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:27:39.814 10:43:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:47.962 10:43:53 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:47.962 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:47.962 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:47.962 Found net devices under 0000:31:00.0: cvl_0_0 00:27:47.962 10:43:53 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:47.962 Found net devices under 0000:31:00.1: cvl_0_1 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
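The loop traced just above is nvmftestinit discovering which kernel net devices sit behind the two supported E810 ports (0000:31:00.0 and 0000:31:00.1, device id 0x159b). A reduced sketch of that discovery path only, keeping the array names from the trace and assuming, as in this run, that both links report up:

# Reduced sketch of the net-device discovery seen in the trace.
net_devs=()
for pci in "${pci_devs[@]}"; do                       # 0000:31:00.0 and 0000:31:00.1 here
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # kernel net devices bound to this port
    pci_net_devs=("${pci_net_devs[@]##*/}")           # strip the sysfs path, keep e.g. cvl_0_0
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
NVMF_TARGET_INTERFACE=${net_devs[0]}                  # cvl_0_0 in this run
NVMF_INITIATOR_INTERFACE=${net_devs[1]}               # cvl_0_1 in this run

With two devices found, cvl_0_0 is used as the target interface and cvl_0_1 as the initiator interface, which is the split the namespace commands in this stretch rely on.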
00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:47.962 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:47.963 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:47.963 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:47.963 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.771 ms 00:27:47.963 00:27:47.963 --- 10.0.0.2 ping statistics --- 00:27:47.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:47.963 rtt min/avg/max/mdev = 0.771/0.771/0.771/0.000 ms 00:27:47.963 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:47.963 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:47.963 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.337 ms 00:27:47.963 00:27:47.963 --- 10.0.0.1 ping statistics --- 00:27:47.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:47.963 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:27:47.963 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:47.963 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:27:47.963 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:47.963 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:47.963 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:47.963 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:47.963 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:47.963 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:47.963 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:47.963 10:43:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:27:47.963 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:47.963 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:47.963 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:47.963 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:47.963 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=2072544 00:27:47.963 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 2072544 00:27:47.963 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 2072544 ']' 00:27:47.963 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:47.963 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:47.963 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:47.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
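Taken together, the setup records above build the two-endpoint NVMe/TCP topology this test runs on: the target port cvl_0_0 with 10.0.0.2 is moved into the cvl_0_0_ns_spdk namespace, the initiator keeps cvl_0_1 with 10.0.0.1 in the root namespace, TCP port 4420 is opened in iptables, reachability is checked in both directions, and nvmf_tgt is started inside the namespace with core mask 0xF. A condensed replay of those commands; the nvmf_tgt path is shortened from the absolute workspace path in the log, and the backgrounding plus waitforlisten on /var/tmp/spdk.sock is implied rather than shown in full:

# Condensed replay of the namespace/target bring-up recorded above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side (namespace)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                 # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target namespace -> initiator
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &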
00:27:47.963 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:47.963 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:47.963 [2024-07-22 10:43:53.525075] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:27:47.963 [2024-07-22 10:43:53.525123] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:47.963 EAL: No free 2048 kB hugepages reported on node 1 00:27:47.963 [2024-07-22 10:43:53.591694] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:47.963 [2024-07-22 10:43:53.626119] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:47.963 [2024-07-22 10:43:53.626157] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:47.963 [2024-07-22 10:43:53.626165] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:47.963 [2024-07-22 10:43:53.626171] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:47.963 [2024-07-22 10:43:53.626177] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:47.963 [2024-07-22 10:43:53.626347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:47.963 [2024-07-22 10:43:53.626480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:47.963 [2024-07-22 10:43:53.626637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:47.963 [2024-07-22 10:43:53.626639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:48.222 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:48.222 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:27:48.222 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:48.222 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:48.222 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:48.222 10:43:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:48.222 10:43:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:48.222 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.222 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:48.222 [2024-07-22 10:43:53.762280] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:48.222 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.222 10:43:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:27:48.222 10:43:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:48.222 10:43:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:48.222 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.222 
10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:48.222 Malloc1 00:27:48.222 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.222 10:43:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:27:48.222 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.222 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:48.222 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.222 10:43:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:48.222 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.222 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:48.222 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.222 10:43:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:48.222 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.222 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:48.222 [2024-07-22 10:43:53.827177] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:48.222 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.222 10:43:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:48.222 10:43:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:27:48.222 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.222 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:48.222 Malloc2 00:27:48.222 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.222 10:43:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:27:48.222 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.222 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:48.222 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.222 10:43:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:27:48.222 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.222 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:48.222 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.223 10:43:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:48.223 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.223 10:43:53 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:48.223 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.223 10:43:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:48.223 10:43:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:27:48.223 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.223 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:48.223 Malloc3 00:27:48.223 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.223 10:43:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:27:48.223 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.223 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:48.223 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.223 10:43:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:27:48.223 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.223 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:48.533 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.533 10:43:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:27:48.533 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.533 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:48.533 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.533 10:43:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:48.533 10:43:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:27:48.533 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.533 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:48.533 Malloc4 00:27:48.533 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.533 10:43:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:27:48.533 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.533 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:48.533 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.533 10:43:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:27:48.533 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.533 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:27:48.533 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.533 10:43:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:27:48.533 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.533 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:48.533 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.533 10:43:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:48.533 10:43:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:27:48.533 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.533 10:43:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:48.533 Malloc5 00:27:48.533 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.533 10:43:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:27:48.533 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.533 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:48.533 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.533 10:43:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:27:48.533 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.533 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:48.533 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.533 10:43:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:27:48.533 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.533 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:48.533 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.533 10:43:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:48.533 10:43:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:27:48.533 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.533 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:48.533 Malloc6 00:27:48.533 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.533 10:43:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:27:48.533 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.533 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:48.533 10:43:54 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.533 10:43:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:27:48.533 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.533 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:48.533 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.533 10:43:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:27:48.533 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.533 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:48.533 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.533 10:43:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:48.533 10:43:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:27:48.533 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.533 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:48.533 Malloc7 00:27:48.533 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.533 10:43:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:27:48.533 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.533 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:48.533 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.533 10:43:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:27:48.533 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.533 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:48.533 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.533 10:43:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:27:48.533 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.533 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:48.533 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.533 10:43:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:48.533 10:43:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:27:48.533 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.533 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:48.846 Malloc8 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:48.846 Malloc9 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 
00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:48.846 Malloc10 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:48.846 Malloc11 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 
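The long run of rpc_cmd records above is the provisioning loop of multiconnection.sh: one TCP transport, then, for each of the NVMF_SUBSYS=11 subsystems, a malloc bdev sized by MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512, a subsystem, a namespace, and a listener on 10.0.0.2:4420. Collapsed back into the loop that produced it; rpc_cmd here stands for the SPDK RPC client aimed at the target's /var/tmp/spdk.sock:

# The provisioning sequence above, collapsed into its loop form.
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
for i in $(seq 1 11); do
    rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"                # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done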
00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:48.846 10:43:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:50.248 10:43:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:27:50.248 10:43:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:50.248 10:43:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:50.248 10:43:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:50.248 10:43:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:52.161 10:43:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:52.161 10:43:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:52.161 10:43:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:27:52.422 10:43:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:52.422 10:43:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:52.422 10:43:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:52.422 10:43:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:52.422 10:43:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:27:53.802 10:43:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:27:53.802 10:43:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:53.802 10:43:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:53.802 10:43:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:53.802 10:43:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:56.350 10:44:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:56.350 10:44:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:56.350 10:44:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:27:56.350 10:44:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:56.350 10:44:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:56.350 
10:44:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:56.350 10:44:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:56.350 10:44:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:27:57.733 10:44:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:27:57.733 10:44:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:57.733 10:44:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:57.733 10:44:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:57.733 10:44:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:59.640 10:44:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:59.640 10:44:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:59.640 10:44:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:27:59.640 10:44:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:59.640 10:44:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:59.640 10:44:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:59.640 10:44:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:59.640 10:44:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:28:01.021 10:44:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:28:01.021 10:44:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:28:01.021 10:44:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:28:01.021 10:44:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:28:01.021 10:44:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:28:03.569 10:44:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:28:03.569 10:44:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:28:03.569 10:44:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:28:03.569 10:44:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:28:03.569 10:44:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:28:03.569 10:44:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:28:03.569 10:44:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:03.569 10:44:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:28:04.955 10:44:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:28:04.955 10:44:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:28:04.955 10:44:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:28:04.955 10:44:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:28:04.955 10:44:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:28:06.866 10:44:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:28:06.866 10:44:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:28:06.866 10:44:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:28:06.866 10:44:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:28:06.866 10:44:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:28:06.866 10:44:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:28:06.866 10:44:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:06.866 10:44:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:28:08.776 10:44:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:28:08.776 10:44:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:28:08.776 10:44:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:28:08.776 10:44:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:28:08.776 10:44:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:28:10.685 10:44:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:28:10.685 10:44:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:28:10.685 10:44:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:28:10.685 10:44:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:28:10.685 10:44:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:28:10.685 10:44:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:28:10.685 10:44:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:10.685 10:44:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:28:12.067 10:44:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:28:12.067 10:44:17 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:28:12.067 10:44:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:28:12.067 10:44:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:28:12.067 10:44:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:28:14.620 10:44:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:28:14.620 10:44:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:28:14.620 10:44:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:28:14.620 10:44:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:28:14.620 10:44:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:28:14.620 10:44:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:28:14.620 10:44:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:14.620 10:44:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:28:16.002 10:44:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:28:16.002 10:44:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:28:16.002 10:44:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:28:16.002 10:44:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:28:16.002 10:44:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:28:17.910 10:44:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:28:17.910 10:44:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:28:17.910 10:44:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:28:17.910 10:44:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:28:17.910 10:44:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:28:17.910 10:44:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:28:17.910 10:44:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:17.910 10:44:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:28:19.822 10:44:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:28:19.822 10:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:28:19.822 10:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:28:19.822 10:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 
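Every block from the first nvme connect up to the fio start is one pass through the same host-side pattern: connect to cnode$i with the hostnqn/hostid generated earlier in the trace, then poll lsblk until a namespace whose serial is SPDK$i appears (waitforserial). A simplified rendering of that loop; the real helper's retry bookkeeping and failure handling are a little more involved than shown here:

# Simplified host-side connect loop behind the repeating records in this stretch.
for i in $(seq 1 11); do
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
    tries=0
    while (( tries++ <= 15 )); do                                  # the trace caps this at roughly 15 attempts
        sleep 2
        (( $(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i") >= 1 )) && break
    done
done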
00:28:19.822 10:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:28:21.731 10:44:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:28:21.731 10:44:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:28:21.731 10:44:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:28:21.731 10:44:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:28:21.731 10:44:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:28:21.731 10:44:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:28:21.731 10:44:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:21.731 10:44:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:28:23.642 10:44:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:28:23.642 10:44:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:28:23.642 10:44:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:28:23.642 10:44:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:28:23.642 10:44:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:28:25.563 10:44:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:28:25.563 10:44:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:28:25.563 10:44:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:28:25.563 10:44:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:28:25.563 10:44:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:28:25.563 10:44:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:28:25.563 10:44:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:25.563 10:44:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:28:27.562 10:44:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:28:27.562 10:44:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:28:27.562 10:44:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:28:27.562 10:44:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:28:27.562 10:44:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:28:29.468 10:44:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:28:29.468 10:44:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o 
NAME,SERIAL 00:28:29.468 10:44:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:28:29.468 10:44:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:28:29.468 10:44:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:28:29.468 10:44:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:28:29.468 10:44:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:28:29.468 [global] 00:28:29.468 thread=1 00:28:29.468 invalidate=1 00:28:29.468 rw=read 00:28:29.468 time_based=1 00:28:29.468 runtime=10 00:28:29.468 ioengine=libaio 00:28:29.468 direct=1 00:28:29.468 bs=262144 00:28:29.468 iodepth=64 00:28:29.468 norandommap=1 00:28:29.468 numjobs=1 00:28:29.468 00:28:29.468 [job0] 00:28:29.468 filename=/dev/nvme0n1 00:28:29.468 [job1] 00:28:29.468 filename=/dev/nvme10n1 00:28:29.468 [job2] 00:28:29.468 filename=/dev/nvme1n1 00:28:29.468 [job3] 00:28:29.468 filename=/dev/nvme2n1 00:28:29.468 [job4] 00:28:29.468 filename=/dev/nvme3n1 00:28:29.468 [job5] 00:28:29.468 filename=/dev/nvme4n1 00:28:29.468 [job6] 00:28:29.468 filename=/dev/nvme5n1 00:28:29.468 [job7] 00:28:29.468 filename=/dev/nvme6n1 00:28:29.468 [job8] 00:28:29.468 filename=/dev/nvme7n1 00:28:29.468 [job9] 00:28:29.468 filename=/dev/nvme8n1 00:28:29.468 [job10] 00:28:29.468 filename=/dev/nvme9n1 00:28:29.727 Could not set queue depth (nvme0n1) 00:28:29.727 Could not set queue depth (nvme10n1) 00:28:29.727 Could not set queue depth (nvme1n1) 00:28:29.727 Could not set queue depth (nvme2n1) 00:28:29.727 Could not set queue depth (nvme3n1) 00:28:29.727 Could not set queue depth (nvme4n1) 00:28:29.727 Could not set queue depth (nvme5n1) 00:28:29.727 Could not set queue depth (nvme6n1) 00:28:29.727 Could not set queue depth (nvme7n1) 00:28:29.727 Could not set queue depth (nvme8n1) 00:28:29.727 Could not set queue depth (nvme9n1) 00:28:29.986 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:29.986 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:29.986 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:29.986 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:29.986 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:29.986 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:29.986 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:29.986 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:29.986 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:29.987 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:29.987 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:29.987 fio-3.35 00:28:29.987 Starting 11 threads 00:28:42.210 00:28:42.210 job0: 
(groupid=0, jobs=1): err= 0: pid=2080958: Mon Jul 22 10:44:46 2024 00:28:42.210 read: IOPS=872, BW=218MiB/s (229MB/s)(2196MiB/10062msec) 00:28:42.210 slat (usec): min=6, max=56239, avg=1012.90, stdev=3105.86 00:28:42.210 clat (usec): min=1929, max=178436, avg=72208.53, stdev=26644.95 00:28:42.210 lat (usec): min=1978, max=178469, avg=73221.44, stdev=27059.61 00:28:42.210 clat percentiles (msec): 00:28:42.210 | 1.00th=[ 8], 5.00th=[ 21], 10.00th=[ 33], 20.00th=[ 50], 00:28:42.210 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 79], 60.00th=[ 85], 00:28:42.210 | 70.00th=[ 89], 80.00th=[ 94], 90.00th=[ 103], 95.00th=[ 110], 00:28:42.210 | 99.00th=[ 123], 99.50th=[ 127], 99.90th=[ 136], 99.95th=[ 155], 00:28:42.210 | 99.99th=[ 180] 00:28:42.210 bw ( KiB/s): min=162816, max=350720, per=9.12%, avg=223232.00, stdev=57762.08, samples=20 00:28:42.210 iops : min= 636, max= 1370, avg=872.00, stdev=225.63, samples=20 00:28:42.210 lat (msec) : 2=0.01%, 4=0.17%, 10=1.39%, 20=3.04%, 50=15.60% 00:28:42.210 lat (msec) : 100=68.05%, 250=11.74% 00:28:42.210 cpu : usr=0.31%, sys=2.59%, ctx=2021, majf=0, minf=3535 00:28:42.210 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:28:42.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:42.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:42.210 issued rwts: total=8783,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:42.210 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:42.210 job1: (groupid=0, jobs=1): err= 0: pid=2080959: Mon Jul 22 10:44:46 2024 00:28:42.210 read: IOPS=859, BW=215MiB/s (225MB/s)(2161MiB/10055msec) 00:28:42.210 slat (usec): min=5, max=113734, avg=1061.81, stdev=3745.67 00:28:42.210 clat (usec): min=1866, max=228008, avg=73290.82, stdev=33850.58 00:28:42.210 lat (usec): min=1914, max=228054, avg=74352.63, stdev=34368.51 00:28:42.210 clat percentiles (msec): 00:28:42.210 | 1.00th=[ 5], 5.00th=[ 16], 10.00th=[ 26], 20.00th=[ 42], 00:28:42.210 | 30.00th=[ 50], 40.00th=[ 67], 50.00th=[ 80], 60.00th=[ 89], 00:28:42.210 | 70.00th=[ 96], 80.00th=[ 103], 90.00th=[ 112], 95.00th=[ 123], 00:28:42.210 | 99.00th=[ 146], 99.50th=[ 157], 99.90th=[ 174], 99.95th=[ 176], 00:28:42.210 | 99.99th=[ 228] 00:28:42.210 bw ( KiB/s): min=152064, max=350720, per=8.98%, avg=219695.85, stdev=60967.76, samples=20 00:28:42.210 iops : min= 594, max= 1370, avg=858.15, stdev=238.15, samples=20 00:28:42.210 lat (msec) : 2=0.02%, 4=0.52%, 10=1.78%, 20=4.71%, 50=23.28% 00:28:42.210 lat (msec) : 100=46.43%, 250=23.26% 00:28:42.210 cpu : usr=0.38%, sys=2.42%, ctx=1930, majf=0, minf=4097 00:28:42.210 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:28:42.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:42.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:42.210 issued rwts: total=8644,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:42.210 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:42.210 job2: (groupid=0, jobs=1): err= 0: pid=2080960: Mon Jul 22 10:44:46 2024 00:28:42.210 read: IOPS=749, BW=187MiB/s (196MB/s)(1884MiB/10054msec) 00:28:42.210 slat (usec): min=5, max=50924, avg=1187.83, stdev=3387.01 00:28:42.210 clat (usec): min=1862, max=161707, avg=84056.18, stdev=25219.05 00:28:42.210 lat (usec): min=1888, max=161732, avg=85244.00, stdev=25614.22 00:28:42.210 clat percentiles (msec): 00:28:42.210 | 1.00th=[ 13], 5.00th=[ 29], 10.00th=[ 52], 20.00th=[ 68], 00:28:42.210 | 30.00th=[ 
77], 40.00th=[ 81], 50.00th=[ 88], 60.00th=[ 94], 00:28:42.210 | 70.00th=[ 100], 80.00th=[ 104], 90.00th=[ 110], 95.00th=[ 116], 00:28:42.210 | 99.00th=[ 142], 99.50th=[ 148], 99.90th=[ 153], 99.95th=[ 155], 00:28:42.210 | 99.99th=[ 163] 00:28:42.210 bw ( KiB/s): min=136192, max=310272, per=7.82%, avg=191272.85, stdev=42967.12, samples=20 00:28:42.210 iops : min= 532, max= 1212, avg=747.15, stdev=167.85, samples=20 00:28:42.210 lat (msec) : 2=0.01%, 4=0.15%, 10=0.60%, 20=1.70%, 50=6.58% 00:28:42.210 lat (msec) : 100=63.25%, 250=27.71% 00:28:42.210 cpu : usr=0.27%, sys=2.22%, ctx=1784, majf=0, minf=4097 00:28:42.210 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:28:42.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:42.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:42.210 issued rwts: total=7534,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:42.210 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:42.210 job3: (groupid=0, jobs=1): err= 0: pid=2080961: Mon Jul 22 10:44:46 2024 00:28:42.210 read: IOPS=806, BW=202MiB/s (211MB/s)(2031MiB/10069msec) 00:28:42.210 slat (usec): min=5, max=63536, avg=1126.02, stdev=3416.33 00:28:42.210 clat (msec): min=2, max=173, avg=78.08, stdev=27.82 00:28:42.210 lat (msec): min=2, max=173, avg=79.21, stdev=28.30 00:28:42.210 clat percentiles (msec): 00:28:42.210 | 1.00th=[ 6], 5.00th=[ 23], 10.00th=[ 38], 20.00th=[ 55], 00:28:42.210 | 30.00th=[ 67], 40.00th=[ 77], 50.00th=[ 84], 60.00th=[ 91], 00:28:42.210 | 70.00th=[ 96], 80.00th=[ 101], 90.00th=[ 109], 95.00th=[ 114], 00:28:42.210 | 99.00th=[ 127], 99.50th=[ 148], 99.90th=[ 163], 99.95th=[ 165], 00:28:42.210 | 99.99th=[ 174] 00:28:42.210 bw ( KiB/s): min=146944, max=337920, per=8.43%, avg=206310.40, stdev=49853.93, samples=20 00:28:42.210 iops : min= 574, max= 1320, avg=805.90, stdev=194.74, samples=20 00:28:42.210 lat (msec) : 4=0.20%, 10=1.95%, 20=2.35%, 50=12.12%, 100=62.60% 00:28:42.210 lat (msec) : 250=20.80% 00:28:42.210 cpu : usr=0.37%, sys=2.57%, ctx=1864, majf=0, minf=4097 00:28:42.210 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:28:42.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:42.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:42.210 issued rwts: total=8122,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:42.210 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:42.211 job4: (groupid=0, jobs=1): err= 0: pid=2080962: Mon Jul 22 10:44:46 2024 00:28:42.211 read: IOPS=907, BW=227MiB/s (238MB/s)(2282MiB/10062msec) 00:28:42.211 slat (usec): min=6, max=45154, avg=1092.09, stdev=2923.92 00:28:42.211 clat (msec): min=21, max=163, avg=69.32, stdev=26.66 00:28:42.211 lat (msec): min=21, max=163, avg=70.41, stdev=27.06 00:28:42.211 clat percentiles (msec): 00:28:42.211 | 1.00th=[ 28], 5.00th=[ 30], 10.00th=[ 35], 20.00th=[ 43], 00:28:42.211 | 30.00th=[ 50], 40.00th=[ 59], 50.00th=[ 68], 60.00th=[ 79], 00:28:42.211 | 70.00th=[ 89], 80.00th=[ 94], 90.00th=[ 106], 95.00th=[ 113], 00:28:42.211 | 99.00th=[ 129], 99.50th=[ 138], 99.90th=[ 144], 99.95th=[ 157], 00:28:42.211 | 99.99th=[ 165] 00:28:42.211 bw ( KiB/s): min=147456, max=473088, per=9.49%, avg=232079.80, stdev=92916.87, samples=20 00:28:42.211 iops : min= 576, max= 1848, avg=906.55, stdev=362.97, samples=20 00:28:42.211 lat (msec) : 50=30.69%, 100=56.01%, 250=13.30% 00:28:42.211 cpu : usr=0.38%, sys=2.93%, ctx=1893, majf=0, minf=4097 
00:28:42.211 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:28:42.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:42.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:42.211 issued rwts: total=9128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:42.211 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:42.211 job5: (groupid=0, jobs=1): err= 0: pid=2080963: Mon Jul 22 10:44:46 2024 00:28:42.211 read: IOPS=937, BW=234MiB/s (246MB/s)(2356MiB/10051msec) 00:28:42.211 slat (usec): min=5, max=66469, avg=917.80, stdev=3161.99 00:28:42.211 clat (msec): min=2, max=187, avg=67.25, stdev=32.83 00:28:42.211 lat (msec): min=2, max=187, avg=68.17, stdev=33.32 00:28:42.211 clat percentiles (msec): 00:28:42.211 | 1.00th=[ 5], 5.00th=[ 13], 10.00th=[ 21], 20.00th=[ 34], 00:28:42.211 | 30.00th=[ 46], 40.00th=[ 58], 50.00th=[ 73], 60.00th=[ 83], 00:28:42.211 | 70.00th=[ 90], 80.00th=[ 97], 90.00th=[ 106], 95.00th=[ 115], 00:28:42.211 | 99.00th=[ 138], 99.50th=[ 140], 99.90th=[ 146], 99.95th=[ 157], 00:28:42.211 | 99.99th=[ 188] 00:28:42.211 bw ( KiB/s): min=158720, max=387584, per=9.79%, avg=239616.00, stdev=78272.66, samples=20 00:28:42.211 iops : min= 620, max= 1514, avg=936.00, stdev=305.75, samples=20 00:28:42.211 lat (msec) : 4=0.36%, 10=3.51%, 20=5.87%, 50=23.95%, 100=50.20% 00:28:42.211 lat (msec) : 250=16.11% 00:28:42.211 cpu : usr=0.41%, sys=2.68%, ctx=2208, majf=0, minf=4097 00:28:42.211 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:28:42.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:42.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:42.211 issued rwts: total=9423,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:42.211 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:42.211 job6: (groupid=0, jobs=1): err= 0: pid=2080964: Mon Jul 22 10:44:46 2024 00:28:42.211 read: IOPS=1082, BW=271MiB/s (284MB/s)(2722MiB/10059msec) 00:28:42.211 slat (usec): min=5, max=73583, avg=735.80, stdev=2859.94 00:28:42.211 clat (usec): min=1933, max=166059, avg=58353.39, stdev=31150.57 00:28:42.211 lat (usec): min=1984, max=206155, avg=59089.19, stdev=31613.31 00:28:42.211 clat percentiles (msec): 00:28:42.211 | 1.00th=[ 5], 5.00th=[ 12], 10.00th=[ 18], 20.00th=[ 27], 00:28:42.211 | 30.00th=[ 39], 40.00th=[ 51], 50.00th=[ 58], 60.00th=[ 66], 00:28:42.211 | 70.00th=[ 75], 80.00th=[ 87], 90.00th=[ 102], 95.00th=[ 109], 00:28:42.211 | 99.00th=[ 134], 99.50th=[ 140], 99.90th=[ 157], 99.95th=[ 157], 00:28:42.211 | 99.99th=[ 167] 00:28:42.211 bw ( KiB/s): min=155136, max=439808, per=11.33%, avg=277107.90, stdev=81047.52, samples=20 00:28:42.211 iops : min= 606, max= 1718, avg=1082.45, stdev=316.59, samples=20 00:28:42.211 lat (msec) : 2=0.01%, 4=0.47%, 10=3.46%, 20=8.27%, 50=27.79% 00:28:42.211 lat (msec) : 100=49.35%, 250=10.66% 00:28:42.211 cpu : usr=0.41%, sys=3.13%, ctx=2547, majf=0, minf=4097 00:28:42.211 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:28:42.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:42.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:42.211 issued rwts: total=10886,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:42.211 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:42.211 job7: (groupid=0, jobs=1): err= 0: pid=2080965: Mon Jul 22 10:44:46 2024 00:28:42.211 read: IOPS=846, BW=212MiB/s 
(222MB/s)(2130MiB/10059msec) 00:28:42.211 slat (usec): min=5, max=83453, avg=977.16, stdev=3834.46 00:28:42.211 clat (usec): min=1720, max=172397, avg=74518.00, stdev=36103.37 00:28:42.211 lat (usec): min=1766, max=172449, avg=75495.16, stdev=36688.83 00:28:42.211 clat percentiles (msec): 00:28:42.211 | 1.00th=[ 6], 5.00th=[ 11], 10.00th=[ 15], 20.00th=[ 31], 00:28:42.211 | 30.00th=[ 59], 40.00th=[ 72], 50.00th=[ 84], 60.00th=[ 94], 00:28:42.211 | 70.00th=[ 101], 80.00th=[ 106], 90.00th=[ 113], 95.00th=[ 122], 00:28:42.211 | 99.00th=[ 138], 99.50th=[ 144], 99.90th=[ 159], 99.95th=[ 163], 00:28:42.211 | 99.99th=[ 174] 00:28:42.211 bw ( KiB/s): min=146944, max=434688, per=8.85%, avg=216473.60, stdev=67872.78, samples=20 00:28:42.211 iops : min= 574, max= 1698, avg=845.60, stdev=265.13, samples=20 00:28:42.211 lat (msec) : 2=0.01%, 4=0.33%, 10=3.99%, 20=9.04%, 50=11.74% 00:28:42.211 lat (msec) : 100=45.42%, 250=29.48% 00:28:42.211 cpu : usr=0.31%, sys=2.64%, ctx=2060, majf=0, minf=4097 00:28:42.211 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:28:42.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:42.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:42.211 issued rwts: total=8519,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:42.211 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:42.211 job8: (groupid=0, jobs=1): err= 0: pid=2080966: Mon Jul 22 10:44:46 2024 00:28:42.211 read: IOPS=810, BW=203MiB/s (213MB/s)(2039MiB/10062msec) 00:28:42.211 slat (usec): min=6, max=61521, avg=990.16, stdev=2991.37 00:28:42.211 clat (msec): min=3, max=144, avg=77.87, stdev=23.41 00:28:42.211 lat (msec): min=3, max=151, avg=78.86, stdev=23.73 00:28:42.211 clat percentiles (msec): 00:28:42.211 | 1.00th=[ 15], 5.00th=[ 36], 10.00th=[ 47], 20.00th=[ 58], 00:28:42.211 | 30.00th=[ 67], 40.00th=[ 74], 50.00th=[ 82], 60.00th=[ 88], 00:28:42.211 | 70.00th=[ 92], 80.00th=[ 97], 90.00th=[ 105], 95.00th=[ 113], 00:28:42.211 | 99.00th=[ 128], 99.50th=[ 134], 99.90th=[ 140], 99.95th=[ 144], 00:28:42.211 | 99.99th=[ 144] 00:28:42.211 bw ( KiB/s): min=160768, max=304128, per=8.47%, avg=207206.40, stdev=44653.70, samples=20 00:28:42.211 iops : min= 628, max= 1188, avg=809.40, stdev=174.43, samples=20 00:28:42.211 lat (msec) : 4=0.01%, 10=0.66%, 20=1.04%, 50=10.63%, 100=72.24% 00:28:42.211 lat (msec) : 250=15.41% 00:28:42.211 cpu : usr=0.34%, sys=2.37%, ctx=1994, majf=0, minf=4097 00:28:42.211 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:28:42.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:42.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:42.211 issued rwts: total=8157,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:42.211 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:42.211 job9: (groupid=0, jobs=1): err= 0: pid=2080967: Mon Jul 22 10:44:46 2024 00:28:42.211 read: IOPS=722, BW=181MiB/s (189MB/s)(1815MiB/10049msec) 00:28:42.211 slat (usec): min=6, max=85642, avg=1251.45, stdev=3615.45 00:28:42.211 clat (usec): min=1394, max=190722, avg=87218.46, stdev=24131.23 00:28:42.211 lat (usec): min=1460, max=190748, avg=88469.92, stdev=24551.86 00:28:42.211 clat percentiles (msec): 00:28:42.211 | 1.00th=[ 15], 5.00th=[ 40], 10.00th=[ 50], 20.00th=[ 73], 00:28:42.211 | 30.00th=[ 82], 40.00th=[ 87], 50.00th=[ 92], 60.00th=[ 95], 00:28:42.211 | 70.00th=[ 100], 80.00th=[ 106], 90.00th=[ 113], 95.00th=[ 120], 00:28:42.211 | 
99.00th=[ 133], 99.50th=[ 146], 99.90th=[ 155], 99.95th=[ 155], 00:28:42.211 | 99.99th=[ 190] 00:28:42.211 bw ( KiB/s): min=150016, max=313344, per=7.53%, avg=184243.20, stdev=35716.71, samples=20 00:28:42.211 iops : min= 586, max= 1224, avg=719.70, stdev=139.52, samples=20 00:28:42.211 lat (msec) : 2=0.10%, 4=0.21%, 10=0.43%, 20=1.07%, 50=8.37% 00:28:42.211 lat (msec) : 100=61.47%, 250=28.35% 00:28:42.211 cpu : usr=0.33%, sys=2.10%, ctx=1694, majf=0, minf=4097 00:28:42.211 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:28:42.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:42.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:42.211 issued rwts: total=7260,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:42.211 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:42.211 job10: (groupid=0, jobs=1): err= 0: pid=2080968: Mon Jul 22 10:44:46 2024 00:28:42.211 read: IOPS=971, BW=243MiB/s (255MB/s)(2444MiB/10060msec) 00:28:42.211 slat (usec): min=5, max=90872, avg=849.17, stdev=3322.46 00:28:42.211 clat (msec): min=2, max=208, avg=64.92, stdev=33.73 00:28:42.211 lat (msec): min=2, max=208, avg=65.77, stdev=34.19 00:28:42.211 clat percentiles (msec): 00:28:42.211 | 1.00th=[ 7], 5.00th=[ 13], 10.00th=[ 21], 20.00th=[ 35], 00:28:42.211 | 30.00th=[ 44], 40.00th=[ 52], 50.00th=[ 62], 60.00th=[ 73], 00:28:42.211 | 70.00th=[ 86], 80.00th=[ 99], 90.00th=[ 112], 95.00th=[ 121], 00:28:42.211 | 99.00th=[ 138], 99.50th=[ 153], 99.90th=[ 153], 99.95th=[ 155], 00:28:42.211 | 99.99th=[ 209] 00:28:42.211 bw ( KiB/s): min=122880, max=407040, per=10.16%, avg=248601.60, stdev=89196.57, samples=20 00:28:42.211 iops : min= 480, max= 1590, avg=971.10, stdev=348.42, samples=20 00:28:42.211 lat (msec) : 4=0.32%, 10=2.99%, 20=6.51%, 50=28.23%, 100=43.24% 00:28:42.211 lat (msec) : 250=18.72% 00:28:42.211 cpu : usr=0.43%, sys=2.70%, ctx=2187, majf=0, minf=4097 00:28:42.211 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:28:42.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:42.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:42.211 issued rwts: total=9774,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:42.211 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:42.211 00:28:42.211 Run status group 0 (all jobs): 00:28:42.211 READ: bw=2389MiB/s (2505MB/s), 181MiB/s-271MiB/s (189MB/s-284MB/s), io=23.5GiB (25.2GB), run=10049-10069msec 00:28:42.211 00:28:42.211 Disk stats (read/write): 00:28:42.211 nvme0n1: ios=17164/0, merge=0/0, ticks=1222540/0, in_queue=1222540, util=96.47% 00:28:42.211 nvme10n1: ios=16991/0, merge=0/0, ticks=1218437/0, in_queue=1218437, util=96.68% 00:28:42.211 nvme1n1: ios=14769/0, merge=0/0, ticks=1220345/0, in_queue=1220345, util=97.06% 00:28:42.211 nvme2n1: ios=15886/0, merge=0/0, ticks=1218260/0, in_queue=1218260, util=97.27% 00:28:42.211 nvme3n1: ios=17878/0, merge=0/0, ticks=1217592/0, in_queue=1217592, util=97.37% 00:28:42.211 nvme4n1: ios=18527/0, merge=0/0, ticks=1220964/0, in_queue=1220964, util=97.87% 00:28:42.211 nvme5n1: ios=21405/0, merge=0/0, ticks=1224010/0, in_queue=1224010, util=97.97% 00:28:42.211 nvme6n1: ios=16627/0, merge=0/0, ticks=1220223/0, in_queue=1220223, util=98.16% 00:28:42.211 nvme7n1: ios=15947/0, merge=0/0, ticks=1222159/0, in_queue=1222159, util=98.70% 00:28:42.211 nvme8n1: ios=14226/0, merge=0/0, ticks=1218029/0, in_queue=1218029, util=98.96% 00:28:42.212 nvme9n1: 
ios=19208/0, merge=0/0, ticks=1222561/0, in_queue=1222561, util=99.15% 00:28:42.212 10:44:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:28:42.212 [global] 00:28:42.212 thread=1 00:28:42.212 invalidate=1 00:28:42.212 rw=randwrite 00:28:42.212 time_based=1 00:28:42.212 runtime=10 00:28:42.212 ioengine=libaio 00:28:42.212 direct=1 00:28:42.212 bs=262144 00:28:42.212 iodepth=64 00:28:42.212 norandommap=1 00:28:42.212 numjobs=1 00:28:42.212 00:28:42.212 [job0] 00:28:42.212 filename=/dev/nvme0n1 00:28:42.212 [job1] 00:28:42.212 filename=/dev/nvme10n1 00:28:42.212 [job2] 00:28:42.212 filename=/dev/nvme1n1 00:28:42.212 [job3] 00:28:42.212 filename=/dev/nvme2n1 00:28:42.212 [job4] 00:28:42.212 filename=/dev/nvme3n1 00:28:42.212 [job5] 00:28:42.212 filename=/dev/nvme4n1 00:28:42.212 [job6] 00:28:42.212 filename=/dev/nvme5n1 00:28:42.212 [job7] 00:28:42.212 filename=/dev/nvme6n1 00:28:42.212 [job8] 00:28:42.212 filename=/dev/nvme7n1 00:28:42.212 [job9] 00:28:42.212 filename=/dev/nvme8n1 00:28:42.212 [job10] 00:28:42.212 filename=/dev/nvme9n1 00:28:42.212 Could not set queue depth (nvme0n1) 00:28:42.212 Could not set queue depth (nvme10n1) 00:28:42.212 Could not set queue depth (nvme1n1) 00:28:42.212 Could not set queue depth (nvme2n1) 00:28:42.212 Could not set queue depth (nvme3n1) 00:28:42.212 Could not set queue depth (nvme4n1) 00:28:42.212 Could not set queue depth (nvme5n1) 00:28:42.212 Could not set queue depth (nvme6n1) 00:28:42.212 Could not set queue depth (nvme7n1) 00:28:42.212 Could not set queue depth (nvme8n1) 00:28:42.212 Could not set queue depth (nvme9n1) 00:28:42.212 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:42.212 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:42.212 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:42.212 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:42.212 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:42.212 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:42.212 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:42.212 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:42.212 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:42.212 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:42.212 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:42.212 fio-3.35 00:28:42.212 Starting 11 threads 00:28:52.227 00:28:52.227 job0: (groupid=0, jobs=1): err= 0: pid=2083360: Mon Jul 22 10:44:57 2024 00:28:52.227 write: IOPS=615, BW=154MiB/s (161MB/s)(1554MiB/10095msec); 0 zone resets 00:28:52.227 slat (usec): min=19, max=11715, avg=1540.05, stdev=2761.84 00:28:52.227 clat (msec): min=11, max=191, avg=102.35, stdev=17.88 00:28:52.227 lat (msec): min=11, 
max=191, avg=103.89, stdev=18.03 00:28:52.227 clat percentiles (msec): 00:28:52.227 | 1.00th=[ 27], 5.00th=[ 61], 10.00th=[ 92], 20.00th=[ 97], 00:28:52.227 | 30.00th=[ 100], 40.00th=[ 102], 50.00th=[ 103], 60.00th=[ 106], 00:28:52.227 | 70.00th=[ 112], 80.00th=[ 115], 90.00th=[ 118], 95.00th=[ 120], 00:28:52.227 | 99.00th=[ 134], 99.50th=[ 142], 99.90th=[ 180], 99.95th=[ 186], 00:28:52.227 | 99.99th=[ 192] 00:28:52.227 bw ( KiB/s): min=133120, max=201728, per=8.39%, avg=157516.80, stdev=17382.84, samples=20 00:28:52.227 iops : min= 520, max= 788, avg=615.30, stdev=67.90, samples=20 00:28:52.227 lat (msec) : 20=0.77%, 50=1.59%, 100=32.16%, 250=65.48% 00:28:52.227 cpu : usr=1.38%, sys=1.82%, ctx=1869, majf=0, minf=1 00:28:52.227 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:28:52.227 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:52.227 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:52.227 issued rwts: total=0,6216,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:52.227 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:52.227 job1: (groupid=0, jobs=1): err= 0: pid=2083391: Mon Jul 22 10:44:57 2024 00:28:52.227 write: IOPS=599, BW=150MiB/s (157MB/s)(1512MiB/10094msec); 0 zone resets 00:28:52.227 slat (usec): min=24, max=18869, avg=1607.59, stdev=2850.04 00:28:52.227 clat (msec): min=14, max=191, avg=105.18, stdev=16.76 00:28:52.227 lat (msec): min=14, max=191, avg=106.79, stdev=16.83 00:28:52.227 clat percentiles (msec): 00:28:52.227 | 1.00th=[ 41], 5.00th=[ 79], 10.00th=[ 82], 20.00th=[ 96], 00:28:52.228 | 30.00th=[ 101], 40.00th=[ 103], 50.00th=[ 106], 60.00th=[ 111], 00:28:52.228 | 70.00th=[ 114], 80.00th=[ 118], 90.00th=[ 123], 95.00th=[ 129], 00:28:52.228 | 99.00th=[ 136], 99.50th=[ 148], 99.90th=[ 180], 99.95th=[ 186], 00:28:52.228 | 99.99th=[ 192] 00:28:52.228 bw ( KiB/s): min=131072, max=195072, per=8.16%, avg=153206.40, stdev=16963.06, samples=20 00:28:52.228 iops : min= 512, max= 762, avg=598.45, stdev=66.26, samples=20 00:28:52.228 lat (msec) : 20=0.13%, 50=1.26%, 100=29.29%, 250=69.32% 00:28:52.228 cpu : usr=1.47%, sys=1.88%, ctx=1682, majf=0, minf=1 00:28:52.228 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:28:52.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:52.228 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:52.228 issued rwts: total=0,6047,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:52.228 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:52.228 job2: (groupid=0, jobs=1): err= 0: pid=2083409: Mon Jul 22 10:44:57 2024 00:28:52.228 write: IOPS=537, BW=134MiB/s (141MB/s)(1358MiB/10096msec); 0 zone resets 00:28:52.228 slat (usec): min=18, max=38047, avg=1784.65, stdev=3357.20 00:28:52.228 clat (msec): min=3, max=217, avg=117.14, stdev=32.67 00:28:52.228 lat (msec): min=3, max=217, avg=118.93, stdev=33.06 00:28:52.228 clat percentiles (msec): 00:28:52.228 | 1.00th=[ 19], 5.00th=[ 75], 10.00th=[ 81], 20.00th=[ 97], 00:28:52.228 | 30.00th=[ 104], 40.00th=[ 110], 50.00th=[ 114], 60.00th=[ 128], 00:28:52.228 | 70.00th=[ 136], 80.00th=[ 138], 90.00th=[ 159], 95.00th=[ 182], 00:28:52.228 | 99.00th=[ 194], 99.50th=[ 205], 99.90th=[ 218], 99.95th=[ 218], 00:28:52.228 | 99.99th=[ 218] 00:28:52.228 bw ( KiB/s): min=86016, max=195584, per=7.32%, avg=137420.80, stdev=29499.47, samples=20 00:28:52.228 iops : min= 336, max= 764, avg=536.80, stdev=115.23, samples=20 00:28:52.228 lat 
(msec) : 4=0.04%, 10=0.28%, 20=0.85%, 50=1.80%, 100=22.43% 00:28:52.228 lat (msec) : 250=74.61% 00:28:52.228 cpu : usr=1.25%, sys=1.49%, ctx=1600, majf=0, minf=1 00:28:52.228 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:28:52.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:52.228 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:52.228 issued rwts: total=0,5431,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:52.228 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:52.228 job3: (groupid=0, jobs=1): err= 0: pid=2083415: Mon Jul 22 10:44:57 2024 00:28:52.228 write: IOPS=609, BW=152MiB/s (160MB/s)(1540MiB/10100msec); 0 zone resets 00:28:52.228 slat (usec): min=17, max=29869, avg=1541.09, stdev=3059.72 00:28:52.228 clat (msec): min=3, max=204, avg=103.40, stdev=39.85 00:28:52.228 lat (msec): min=3, max=204, avg=104.94, stdev=40.42 00:28:52.228 clat percentiles (msec): 00:28:52.228 | 1.00th=[ 10], 5.00th=[ 33], 10.00th=[ 50], 20.00th=[ 62], 00:28:52.228 | 30.00th=[ 95], 40.00th=[ 104], 50.00th=[ 109], 60.00th=[ 112], 00:28:52.228 | 70.00th=[ 129], 80.00th=[ 136], 90.00th=[ 146], 95.00th=[ 169], 00:28:52.228 | 99.00th=[ 180], 99.50th=[ 186], 99.90th=[ 192], 99.95th=[ 199], 00:28:52.228 | 99.99th=[ 205] 00:28:52.228 bw ( KiB/s): min=98304, max=327680, per=8.31%, avg=156032.00, stdev=58083.15, samples=20 00:28:52.228 iops : min= 384, max= 1280, avg=609.50, stdev=226.89, samples=20 00:28:52.228 lat (msec) : 4=0.05%, 10=1.17%, 20=1.92%, 50=7.28%, 100=23.40% 00:28:52.228 lat (msec) : 250=66.19% 00:28:52.228 cpu : usr=1.39%, sys=1.89%, ctx=2067, majf=0, minf=1 00:28:52.228 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:28:52.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:52.228 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:52.228 issued rwts: total=0,6158,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:52.228 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:52.228 job4: (groupid=0, jobs=1): err= 0: pid=2083416: Mon Jul 22 10:44:57 2024 00:28:52.228 write: IOPS=625, BW=156MiB/s (164MB/s)(1581MiB/10102msec); 0 zone resets 00:28:52.228 slat (usec): min=20, max=19261, avg=1542.74, stdev=2915.52 00:28:52.228 clat (msec): min=3, max=204, avg=100.69, stdev=36.04 00:28:52.228 lat (msec): min=3, max=204, avg=102.23, stdev=36.53 00:28:52.228 clat percentiles (msec): 00:28:52.228 | 1.00th=[ 17], 5.00th=[ 43], 10.00th=[ 50], 20.00th=[ 64], 00:28:52.228 | 30.00th=[ 81], 40.00th=[ 97], 50.00th=[ 103], 60.00th=[ 108], 00:28:52.228 | 70.00th=[ 128], 80.00th=[ 136], 90.00th=[ 138], 95.00th=[ 159], 00:28:52.228 | 99.00th=[ 178], 99.50th=[ 186], 99.90th=[ 192], 99.95th=[ 199], 00:28:52.228 | 99.99th=[ 205] 00:28:52.228 bw ( KiB/s): min=102400, max=321536, per=8.53%, avg=160230.40, stdev=57023.94, samples=20 00:28:52.228 iops : min= 400, max= 1256, avg=625.90, stdev=222.75, samples=20 00:28:52.228 lat (msec) : 4=0.13%, 10=0.38%, 20=0.70%, 50=8.98%, 100=34.01% 00:28:52.228 lat (msec) : 250=55.81% 00:28:52.228 cpu : usr=1.40%, sys=1.78%, ctx=1839, majf=0, minf=1 00:28:52.228 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:28:52.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:52.228 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:52.228 issued rwts: total=0,6322,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:52.228 latency : target=0, 
window=0, percentile=100.00%, depth=64 00:28:52.228 job5: (groupid=0, jobs=1): err= 0: pid=2083417: Mon Jul 22 10:44:57 2024 00:28:52.228 write: IOPS=668, BW=167MiB/s (175MB/s)(1685MiB/10075msec); 0 zone resets 00:28:52.228 slat (usec): min=20, max=26524, avg=1422.94, stdev=2578.49 00:28:52.228 clat (msec): min=4, max=149, avg=94.17, stdev=20.03 00:28:52.228 lat (msec): min=5, max=150, avg=95.59, stdev=20.21 00:28:52.228 clat percentiles (msec): 00:28:52.228 | 1.00th=[ 33], 5.00th=[ 74], 10.00th=[ 77], 20.00th=[ 80], 00:28:52.228 | 30.00th=[ 81], 40.00th=[ 84], 50.00th=[ 85], 60.00th=[ 102], 00:28:52.228 | 70.00th=[ 106], 80.00th=[ 117], 90.00th=[ 120], 95.00th=[ 125], 00:28:52.228 | 99.00th=[ 136], 99.50th=[ 138], 99.90th=[ 144], 99.95th=[ 146], 00:28:52.228 | 99.99th=[ 150] 00:28:52.228 bw ( KiB/s): min=131072, max=219648, per=9.10%, avg=170925.15, stdev=29502.41, samples=20 00:28:52.228 iops : min= 512, max= 858, avg=667.65, stdev=115.22, samples=20 00:28:52.228 lat (msec) : 10=0.19%, 20=0.30%, 50=1.51%, 100=56.27%, 250=41.73% 00:28:52.228 cpu : usr=1.50%, sys=1.98%, ctx=1975, majf=0, minf=1 00:28:52.228 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:28:52.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:52.228 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:52.228 issued rwts: total=0,6739,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:52.228 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:52.228 job6: (groupid=0, jobs=1): err= 0: pid=2083418: Mon Jul 22 10:44:57 2024 00:28:52.228 write: IOPS=550, BW=138MiB/s (144MB/s)(1390MiB/10102msec); 0 zone resets 00:28:52.228 slat (usec): min=26, max=25869, avg=1737.71, stdev=3198.53 00:28:52.228 clat (msec): min=16, max=207, avg=114.55, stdev=29.03 00:28:52.228 lat (msec): min=16, max=207, avg=116.28, stdev=29.31 00:28:52.228 clat percentiles (msec): 00:28:52.228 | 1.00th=[ 68], 5.00th=[ 74], 10.00th=[ 79], 20.00th=[ 90], 00:28:52.228 | 30.00th=[ 100], 40.00th=[ 105], 50.00th=[ 110], 60.00th=[ 125], 00:28:52.228 | 70.00th=[ 136], 80.00th=[ 138], 90.00th=[ 153], 95.00th=[ 171], 00:28:52.228 | 99.00th=[ 192], 99.50th=[ 201], 99.90th=[ 207], 99.95th=[ 209], 00:28:52.228 | 99.99th=[ 209] 00:28:52.228 bw ( KiB/s): min=96256, max=210432, per=7.49%, avg=140688.35, stdev=33364.40, samples=20 00:28:52.228 iops : min= 376, max= 822, avg=549.55, stdev=130.32, samples=20 00:28:52.228 lat (msec) : 20=0.07%, 50=0.52%, 100=32.30%, 250=67.11% 00:28:52.228 cpu : usr=1.47%, sys=1.76%, ctx=1588, majf=0, minf=1 00:28:52.228 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:28:52.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:52.228 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:52.228 issued rwts: total=0,5558,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:52.228 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:52.228 job7: (groupid=0, jobs=1): err= 0: pid=2083419: Mon Jul 22 10:44:57 2024 00:28:52.228 write: IOPS=616, BW=154MiB/s (162MB/s)(1556MiB/10094msec); 0 zone resets 00:28:52.228 slat (usec): min=22, max=12226, avg=1556.27, stdev=2755.81 00:28:52.228 clat (msec): min=11, max=191, avg=102.24, stdev=19.42 00:28:52.228 lat (msec): min=11, max=191, avg=103.79, stdev=19.55 00:28:52.228 clat percentiles (msec): 00:28:52.228 | 1.00th=[ 47], 5.00th=[ 77], 10.00th=[ 80], 20.00th=[ 93], 00:28:52.228 | 30.00th=[ 97], 40.00th=[ 99], 50.00th=[ 101], 60.00th=[ 103], 
00:28:52.228 | 70.00th=[ 108], 80.00th=[ 117], 90.00th=[ 121], 95.00th=[ 136], 00:28:52.228 | 99.00th=[ 171], 99.50th=[ 178], 99.90th=[ 186], 99.95th=[ 186], 00:28:52.228 | 99.99th=[ 192] 00:28:52.228 bw ( KiB/s): min=129024, max=207360, per=8.40%, avg=157690.55, stdev=21270.93, samples=20 00:28:52.228 iops : min= 504, max= 810, avg=615.95, stdev=83.03, samples=20 00:28:52.228 lat (msec) : 20=0.08%, 50=1.13%, 100=48.57%, 250=50.23% 00:28:52.228 cpu : usr=1.54%, sys=1.84%, ctx=1746, majf=0, minf=1 00:28:52.228 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:28:52.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:52.228 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:52.228 issued rwts: total=0,6222,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:52.228 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:52.229 job8: (groupid=0, jobs=1): err= 0: pid=2083420: Mon Jul 22 10:44:57 2024 00:28:52.229 write: IOPS=856, BW=214MiB/s (225MB/s)(2153MiB/10055msec); 0 zone resets 00:28:52.229 slat (usec): min=21, max=25839, avg=1128.83, stdev=2223.12 00:28:52.229 clat (msec): min=2, max=140, avg=73.56, stdev=32.09 00:28:52.229 lat (msec): min=2, max=140, avg=74.69, stdev=32.54 00:28:52.229 clat percentiles (msec): 00:28:52.229 | 1.00th=[ 29], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 44], 00:28:52.229 | 30.00th=[ 48], 40.00th=[ 54], 50.00th=[ 57], 60.00th=[ 92], 00:28:52.229 | 70.00th=[ 101], 80.00th=[ 112], 90.00th=[ 120], 95.00th=[ 125], 00:28:52.229 | 99.00th=[ 133], 99.50th=[ 138], 99.90th=[ 140], 99.95th=[ 140], 00:28:52.229 | 99.99th=[ 142] 00:28:52.229 bw ( KiB/s): min=128512, max=368640, per=11.65%, avg=218906.85, stdev=95448.67, samples=20 00:28:52.229 iops : min= 502, max= 1440, avg=855.10, stdev=372.84, samples=20 00:28:52.229 lat (msec) : 4=0.07%, 10=0.13%, 20=0.39%, 50=33.55%, 100=35.46% 00:28:52.229 lat (msec) : 250=30.40% 00:28:52.229 cpu : usr=2.08%, sys=2.35%, ctx=2358, majf=0, minf=1 00:28:52.229 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:28:52.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:52.229 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:52.229 issued rwts: total=0,8613,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:52.229 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:52.229 job9: (groupid=0, jobs=1): err= 0: pid=2083421: Mon Jul 22 10:44:57 2024 00:28:52.229 write: IOPS=702, BW=176MiB/s (184MB/s)(1769MiB/10076msec); 0 zone resets 00:28:52.229 slat (usec): min=16, max=18559, avg=1341.17, stdev=2523.67 00:28:52.229 clat (msec): min=3, max=176, avg=89.79, stdev=26.42 00:28:52.229 lat (msec): min=4, max=176, avg=91.13, stdev=26.75 00:28:52.229 clat percentiles (msec): 00:28:52.229 | 1.00th=[ 27], 5.00th=[ 60], 10.00th=[ 63], 20.00th=[ 75], 00:28:52.229 | 30.00th=[ 79], 40.00th=[ 80], 50.00th=[ 84], 60.00th=[ 85], 00:28:52.229 | 70.00th=[ 97], 80.00th=[ 107], 90.00th=[ 120], 95.00th=[ 157], 00:28:52.229 | 99.00th=[ 169], 99.50th=[ 169], 99.90th=[ 171], 99.95th=[ 174], 00:28:52.229 | 99.99th=[ 178] 00:28:52.229 bw ( KiB/s): min=100352, max=262144, per=9.56%, avg=179497.50, stdev=40427.86, samples=20 00:28:52.229 iops : min= 392, max= 1024, avg=701.15, stdev=157.93, samples=20 00:28:52.229 lat (msec) : 4=0.01%, 10=0.20%, 20=0.45%, 50=1.65%, 100=72.86% 00:28:52.229 lat (msec) : 250=24.82% 00:28:52.229 cpu : usr=1.63%, sys=1.91%, ctx=2112, majf=0, minf=1 00:28:52.229 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:28:52.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:52.229 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:52.229 issued rwts: total=0,7074,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:52.229 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:52.229 job10: (groupid=0, jobs=1): err= 0: pid=2083422: Mon Jul 22 10:44:57 2024 00:28:52.229 write: IOPS=966, BW=242MiB/s (253MB/s)(2435MiB/10074msec); 0 zone resets 00:28:52.229 slat (usec): min=13, max=93094, avg=1021.98, stdev=2254.48 00:28:52.229 clat (msec): min=31, max=194, avg=65.14, stdev=24.76 00:28:52.229 lat (msec): min=31, max=194, avg=66.16, stdev=25.07 00:28:52.229 clat percentiles (msec): 00:28:52.229 | 1.00th=[ 33], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 41], 00:28:52.229 | 30.00th=[ 45], 40.00th=[ 50], 50.00th=[ 63], 60.00th=[ 79], 00:28:52.229 | 70.00th=[ 80], 80.00th=[ 84], 90.00th=[ 99], 95.00th=[ 108], 00:28:52.229 | 99.00th=[ 127], 99.50th=[ 142], 99.90th=[ 182], 99.95th=[ 188], 00:28:52.229 | 99.99th=[ 194] 00:28:52.229 bw ( KiB/s): min=133386, max=445440, per=13.19%, avg=247718.90, stdev=97316.79, samples=20 00:28:52.229 iops : min= 521, max= 1740, avg=967.65, stdev=380.15, samples=20 00:28:52.229 lat (msec) : 50=40.36%, 100=50.98%, 250=8.66% 00:28:52.229 cpu : usr=2.00%, sys=3.19%, ctx=2412, majf=0, minf=1 00:28:52.229 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:28:52.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:52.229 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:52.229 issued rwts: total=0,9739,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:52.229 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:52.229 00:28:52.229 Run status group 0 (all jobs): 00:28:52.229 WRITE: bw=1834MiB/s (1923MB/s), 134MiB/s-242MiB/s (141MB/s-253MB/s), io=18.1GiB (19.4GB), run=10055-10102msec 00:28:52.229 00:28:52.229 Disk stats (read/write): 00:28:52.229 nvme0n1: ios=47/12111, merge=0/0, ticks=1157/1200048, in_queue=1201205, util=100.00% 00:28:52.229 nvme10n1: ios=45/12093, merge=0/0, ticks=100/1228601, in_queue=1228701, util=97.20% 00:28:52.229 nvme1n1: ios=41/10857, merge=0/0, ticks=226/1229840, in_queue=1230066, util=98.59% 00:28:52.229 nvme2n1: ios=0/12307, merge=0/0, ticks=0/1230445, in_queue=1230445, util=97.24% 00:28:52.229 nvme3n1: ios=0/12631, merge=0/0, ticks=0/1229800, in_queue=1229800, util=97.35% 00:28:52.229 nvme4n1: ios=44/13100, merge=0/0, ticks=1274/1199744, in_queue=1201018, util=100.00% 00:28:52.229 nvme5n1: ios=0/11104, merge=0/0, ticks=0/1229315, in_queue=1229315, util=97.99% 00:28:52.229 nvme6n1: ios=0/12443, merge=0/0, ticks=0/1229198, in_queue=1229198, util=98.15% 00:28:52.229 nvme7n1: ios=0/16748, merge=0/0, ticks=0/1201273, in_queue=1201273, util=98.63% 00:28:52.229 nvme8n1: ios=0/13771, merge=0/0, ticks=0/1201910, in_queue=1201910, util=98.88% 00:28:52.229 nvme9n1: ios=43/19103, merge=0/0, ticks=1219/1191444, in_queue=1192663, util=99.89% 00:28:52.229 10:44:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:28:52.229 10:44:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:28:52.229 10:44:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:52.229 10:44:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:28:52.229 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:52.229 10:44:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:28:52.229 10:44:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:52.229 10:44:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:52.229 10:44:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:28:52.229 10:44:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:52.229 10:44:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:28:52.229 10:44:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:52.229 10:44:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:52.229 10:44:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.229 10:44:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:52.229 10:44:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.229 10:44:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:52.229 10:44:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:28:52.489 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:28:52.489 10:44:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:28:52.489 10:44:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:52.489 10:44:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:52.489 10:44:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:28:52.489 10:44:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:52.489 10:44:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:28:52.489 10:44:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:52.489 10:44:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:52.489 10:44:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.489 10:44:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:52.489 10:44:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.489 10:44:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:52.489 10:44:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:28:52.772 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:28:52.772 10:44:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:28:52.772 10:44:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:52.772 10:44:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:52.772 10:44:58 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:28:52.772 10:44:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:52.772 10:44:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:28:52.772 10:44:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:52.772 10:44:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:28:52.772 10:44:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.772 10:44:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:52.772 10:44:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.772 10:44:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:52.772 10:44:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:28:53.032 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:28:53.032 10:44:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:28:53.032 10:44:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:53.032 10:44:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:53.032 10:44:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:28:53.032 10:44:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:53.032 10:44:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:28:53.032 10:44:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:53.032 10:44:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:28:53.032 10:44:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.032 10:44:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:53.032 10:44:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.032 10:44:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:53.032 10:44:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:28:53.292 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:28:53.292 10:44:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:28:53.292 10:44:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:53.292 10:44:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:53.292 10:44:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:28:53.292 10:44:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:53.292 10:44:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:28:53.292 10:44:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:53.292 10:44:58 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:28:53.292 10:44:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.292 10:44:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:53.292 10:44:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.292 10:44:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:53.292 10:44:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:28:53.551 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:28:53.551 10:44:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:28:53.551 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:53.551 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:53.551 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:28:53.551 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:53.551 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:28:53.551 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:53.551 10:44:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:28:53.551 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.551 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:53.551 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.551 10:44:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:53.551 10:44:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:28:53.811 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:28:53.811 10:44:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:28:53.811 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:53.811 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:53.811 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:28:53.811 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:53.811 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:28:53.811 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:53.811 10:44:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:28:53.811 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.811 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:53.811 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.811 10:44:59 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:53.811 10:44:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:28:53.811 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:28:53.811 10:44:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:28:53.811 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:53.811 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:53.811 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:28:53.811 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:28:53.811 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:54.072 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:54.072 10:44:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:28:54.072 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.072 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:54.072 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.072 10:44:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:54.072 10:44:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:28:54.072 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:28:54.072 10:44:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:28:54.072 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:54.072 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:54.072 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:28:54.072 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:54.072 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:28:54.072 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:54.072 10:44:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:28:54.072 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.072 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:54.072 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.072 10:44:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:54.072 10:44:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:28:54.072 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:28:54.072 10:44:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:28:54.072 10:44:59 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1219 -- # local i=0 00:28:54.072 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:54.072 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:28:54.072 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:54.072 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:28:54.072 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:54.072 10:44:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:28:54.072 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.072 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:54.331 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.331 10:44:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:54.331 10:44:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:28:54.331 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:28:54.331 10:44:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:28:54.331 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:54.331 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:54.331 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:28:54.331 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:28:54.331 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:54.331 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:54.331 10:44:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:28:54.331 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.331 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:54.331 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.331 10:44:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:28:54.331 10:44:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:28:54.331 10:44:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:28:54.331 10:44:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:54.331 10:44:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:28:54.331 10:44:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:54.331 10:44:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:28:54.331 10:44:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:54.331 10:44:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:54.331 rmmod nvme_tcp 00:28:54.331 rmmod nvme_fabrics 
00:28:54.331 rmmod nvme_keyring 00:28:54.331 10:44:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:54.331 10:44:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:28:54.331 10:44:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:28:54.331 10:44:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 2072544 ']' 00:28:54.331 10:44:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 2072544 00:28:54.331 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 2072544 ']' 00:28:54.331 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # kill -0 2072544 00:28:54.331 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # uname 00:28:54.332 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:54.332 10:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2072544 00:28:54.591 10:45:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:54.591 10:45:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:54.591 10:45:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2072544' 00:28:54.591 killing process with pid 2072544 00:28:54.591 10:45:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@967 -- # kill 2072544 00:28:54.591 10:45:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@972 -- # wait 2072544 00:28:54.851 10:45:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:54.851 10:45:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:54.851 10:45:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:54.851 10:45:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:54.851 10:45:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:54.851 10:45:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:54.851 10:45:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:54.851 10:45:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:56.764 10:45:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:56.764 00:28:56.764 real 1m17.133s 00:28:56.764 user 4m49.340s 00:28:56.764 sys 0m22.530s 00:28:56.764 10:45:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:56.764 10:45:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:56.764 ************************************ 00:28:56.764 END TEST nvmf_multiconnection 00:28:56.764 ************************************ 00:28:56.764 10:45:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:56.764 10:45:02 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:28:56.764 10:45:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:56.764 10:45:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:56.764 10:45:02 nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:28:57.025 ************************************ 00:28:57.025 START TEST nvmf_initiator_timeout 00:28:57.025 ************************************ 00:28:57.025 10:45:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:28:57.025 * Looking for test storage... 00:28:57.025 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:57.025 10:45:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:57.025 10:45:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:28:57.025 10:45:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:57.025 10:45:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:57.025 10:45:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:57.025 10:45:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:57.025 10:45:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:57.025 10:45:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:57.025 10:45:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:57.025 10:45:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:57.025 10:45:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:57.025 10:45:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:57.025 10:45:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:57.025 10:45:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:57.025 10:45:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:57.025 10:45:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:57.025 10:45:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:57.025 10:45:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:57.025 10:45:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:57.025 10:45:02 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:57.025 10:45:02 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:57.025 10:45:02 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:57.025 10:45:02 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:57.025 10:45:02 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:57.025 10:45:02 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:57.025 10:45:02 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:28:57.025 10:45:02 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:57.025 10:45:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:28:57.025 10:45:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:57.025 10:45:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:57.025 10:45:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:57.025 10:45:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:57.025 10:45:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:57.025 10:45:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:57.026 10:45:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:57.026 10:45:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:57.026 10:45:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:57.026 10:45:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:28:57.026 10:45:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:28:57.026 10:45:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:57.026 10:45:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:57.026 10:45:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:57.026 10:45:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:57.026 10:45:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:57.026 10:45:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:57.026 10:45:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:57.026 10:45:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:57.026 10:45:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:57.026 10:45:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:57.026 10:45:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:28:57.026 10:45:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:05.160 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:05.160 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:29:05.160 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:05.160 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:05.160 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:05.160 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:05.160 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:05.160 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:29:05.160 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:05.160 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:29:05.160 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:29:05.160 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:29:05.160 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:29:05.160 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:29:05.160 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:29:05.160 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:05.160 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:05.160 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:05.160 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:05.160 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:05.160 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 
-- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:05.160 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:05.160 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:05.160 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:05.160 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:05.160 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:05.160 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:05.160 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:05.160 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:05.160 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:05.160 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:05.160 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:05.160 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:05.160 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:05.160 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:05.160 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:05.160 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:05.160 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:05.160 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:05.160 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:05.160 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:05.161 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:05.161 10:45:10 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:05.161 Found net devices under 0000:31:00.0: cvl_0_0 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:05.161 Found net devices under 0000:31:00.1: cvl_0_1 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:05.161 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:05.161 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.673 ms 00:29:05.161 00:29:05.161 --- 10.0.0.2 ping statistics --- 00:29:05.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:05.161 rtt min/avg/max/mdev = 0.673/0.673/0.673/0.000 ms 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:05.161 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:05.161 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.339 ms 00:29:05.161 00:29:05.161 --- 10.0.0.1 ping statistics --- 00:29:05.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:05.161 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=2090938 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 2090938 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@829 -- # '[' -z 2090938 ']' 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:05.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:05.161 10:45:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:05.161 [2024-07-22 10:45:10.848185] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:29:05.161 [2024-07-22 10:45:10.848247] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:05.421 EAL: No free 2048 kB hugepages reported on node 1 00:29:05.421 [2024-07-22 10:45:10.927000] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:05.421 [2024-07-22 10:45:10.967435] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:05.421 [2024-07-22 10:45:10.967484] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:05.421 [2024-07-22 10:45:10.967493] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:05.421 [2024-07-22 10:45:10.967500] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:05.421 [2024-07-22 10:45:10.967505] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
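The target in this run is launched inside the cvl_0_0_ns_spdk network namespace with the arguments shown above (-i 0 -e 0xFFFF -m 0xF), and the harness then waits for the RPC socket before issuing any commands. A rough equivalent of that start-and-wait step, assuming the default /var/tmp/spdk.sock socket and a checked-out spdk tree in the current directory (both assumptions):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Poll the RPC socket until the target answers; rpc_get_methods is a cheap query.
  until ip netns exec cvl_0_0_ns_spdk ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 1
  done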
00:29:05.421 [2024-07-22 10:45:10.967715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:05.421 [2024-07-22 10:45:10.967834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:05.421 [2024-07-22 10:45:10.967992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:05.421 [2024-07-22 10:45:10.967993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:06.141 10:45:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:06.142 10:45:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # return 0 00:29:06.142 10:45:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:06.142 10:45:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:06.142 10:45:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:06.142 10:45:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:06.142 10:45:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:29:06.142 10:45:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:06.142 10:45:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.142 10:45:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:06.142 Malloc0 00:29:06.142 10:45:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:06.142 10:45:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:29:06.142 10:45:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.142 10:45:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:06.142 Delay0 00:29:06.142 10:45:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:06.142 10:45:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:06.142 10:45:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.142 10:45:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:06.142 [2024-07-22 10:45:11.704259] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:06.142 10:45:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:06.142 10:45:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:29:06.142 10:45:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.142 10:45:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:06.142 10:45:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:06.142 10:45:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:06.142 10:45:11 nvmf_tcp.nvmf_initiator_timeout -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.142 10:45:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:06.142 10:45:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:06.142 10:45:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:06.142 10:45:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.142 10:45:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:06.142 [2024-07-22 10:45:11.744560] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:06.142 10:45:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:06.142 10:45:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:29:08.047 10:45:13 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:29:08.047 10:45:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:29:08.047 10:45:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:29:08.047 10:45:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:29:08.047 10:45:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:29:09.952 10:45:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:29:09.952 10:45:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:29:09.952 10:45:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:29:09.952 10:45:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:29:09.952 10:45:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:29:09.952 10:45:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:29:09.952 10:45:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=2091979 00:29:09.952 10:45:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:29:09.953 10:45:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:29:09.953 [global] 00:29:09.953 thread=1 00:29:09.953 invalidate=1 00:29:09.953 rw=write 00:29:09.953 time_based=1 00:29:09.953 runtime=60 00:29:09.953 ioengine=libaio 00:29:09.953 direct=1 00:29:09.953 bs=4096 00:29:09.953 iodepth=1 00:29:09.953 norandommap=0 00:29:09.953 numjobs=1 00:29:09.953 00:29:09.953 verify_dump=1 00:29:09.953 verify_backlog=512 00:29:09.953 verify_state_save=0 00:29:09.953 do_verify=1 00:29:09.953 verify=crc32c-intel 00:29:09.953 [job0] 00:29:09.953 filename=/dev/nvme0n1 00:29:09.953 Could not set queue depth (nvme0n1) 00:29:10.212 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:10.212 fio-3.35 00:29:10.212 
Starting 1 thread 00:29:12.757 10:45:18 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:29:12.757 10:45:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.757 10:45:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:12.757 true 00:29:12.757 10:45:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.757 10:45:18 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:29:12.757 10:45:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.757 10:45:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:12.757 true 00:29:12.757 10:45:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.757 10:45:18 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:29:12.757 10:45:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.757 10:45:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:12.757 true 00:29:12.757 10:45:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.757 10:45:18 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:29:12.757 10:45:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.757 10:45:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:12.757 true 00:29:12.757 10:45:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.757 10:45:18 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:29:16.055 10:45:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:29:16.055 10:45:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:16.055 10:45:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:16.055 true 00:29:16.055 10:45:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:16.055 10:45:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:29:16.055 10:45:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:16.055 10:45:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:16.055 true 00:29:16.055 10:45:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:16.055 10:45:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:29:16.055 10:45:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:16.055 10:45:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:16.055 true 00:29:16.055 10:45:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:16.055 10:45:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 
-- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:29:16.055 10:45:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:16.055 10:45:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:16.055 true 00:29:16.055 10:45:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:16.055 10:45:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:29:16.055 10:45:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 2091979 00:30:12.311 00:30:12.311 job0: (groupid=0, jobs=1): err= 0: pid=2092144: Mon Jul 22 10:46:15 2024 00:30:12.311 read: IOPS=183, BW=735KiB/s (753kB/s)(43.1MiB/60001msec) 00:30:12.311 slat (nsec): min=6726, max=82423, avg=24897.69, stdev=5153.48 00:30:12.311 clat (usec): min=314, max=41909k, avg=4854.89, stdev=399136.03 00:30:12.311 lat (usec): min=321, max=41909k, avg=4879.79, stdev=399136.03 00:30:12.311 clat percentiles (usec): 00:30:12.311 | 1.00th=[ 553], 5.00th=[ 668], 10.00th=[ 750], 20.00th=[ 791], 00:30:12.311 | 30.00th=[ 832], 40.00th=[ 865], 50.00th=[ 881], 60.00th=[ 898], 00:30:12.311 | 70.00th=[ 930], 80.00th=[ 1004], 90.00th=[ 1090], 95.00th=[ 1123], 00:30:12.311 | 99.00th=[ 1172], 99.50th=[ 1205], 99.90th=[42206], 99.95th=[42206], 00:30:12.311 | 99.99th=[42730] 00:30:12.311 write: IOPS=187, BW=751KiB/s (769kB/s)(44.0MiB/60001msec); 0 zone resets 00:30:12.311 slat (usec): min=9, max=31020, avg=34.33, stdev=334.96 00:30:12.311 clat (usec): min=176, max=901, avg=500.05, stdev=121.88 00:30:12.311 lat (usec): min=186, max=31664, avg=534.37, stdev=359.57 00:30:12.311 clat percentiles (usec): 00:30:12.311 | 1.00th=[ 245], 5.00th=[ 330], 10.00th=[ 343], 20.00th=[ 388], 00:30:12.311 | 30.00th=[ 429], 40.00th=[ 449], 50.00th=[ 486], 60.00th=[ 529], 00:30:12.311 | 70.00th=[ 570], 80.00th=[ 611], 90.00th=[ 668], 95.00th=[ 717], 00:30:12.311 | 99.00th=[ 775], 99.50th=[ 791], 99.90th=[ 832], 99.95th=[ 840], 00:30:12.311 | 99.99th=[ 889] 00:30:12.311 bw ( KiB/s): min= 176, max= 4096, per=100.00%, avg=2709.00, stdev=1386.99, samples=32 00:30:12.311 iops : min= 44, max= 1024, avg=677.25, stdev=346.75, samples=32 00:30:12.311 lat (usec) : 250=0.72%, 500=26.01%, 750=27.89%, 1000=35.21% 00:30:12.311 lat (msec) : 2=9.96%, 50=0.20%, >=2000=0.01% 00:30:12.311 cpu : usr=0.56%, sys=1.07%, ctx=22296, majf=0, minf=1 00:30:12.311 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:12.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:12.311 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:12.311 issued rwts: total=11025,11264,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:12.311 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:12.311 00:30:12.311 Run status group 0 (all jobs): 00:30:12.311 READ: bw=735KiB/s (753kB/s), 735KiB/s-735KiB/s (753kB/s-753kB/s), io=43.1MiB (45.2MB), run=60001-60001msec 00:30:12.311 WRITE: bw=751KiB/s (769kB/s), 751KiB/s-751KiB/s (769kB/s-769kB/s), io=44.0MiB (46.1MB), run=60001-60001msec 00:30:12.311 00:30:12.311 Disk stats (read/write): 00:30:12.311 nvme0n1: ios=10962/11264, merge=0/0, ticks=12036/5536, in_queue=17572, util=99.87% 00:30:12.311 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:30:12.311 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:12.311 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:30:12.311 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:30:12.311 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:30:12.311 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:12.311 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:30:12.311 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:12.311 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:30:12.311 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:30:12.311 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:30:12.311 nvmf hotplug test: fio successful as expected 00:30:12.311 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:12.311 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:12.311 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:12.311 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:12.311 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:30:12.311 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:30:12.311 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:30:12.311 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:12.311 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:30:12.311 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:12.311 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:30:12.311 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:12.311 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:12.311 rmmod nvme_tcp 00:30:12.311 rmmod nvme_fabrics 00:30:12.311 rmmod nvme_keyring 00:30:12.311 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:12.311 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:30:12.311 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:30:12.311 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 2090938 ']' 00:30:12.311 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 2090938 00:30:12.311 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@948 -- # '[' -z 2090938 ']' 00:30:12.311 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # kill -0 2090938 00:30:12.311 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # uname 00:30:12.311 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:12.311 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2090938 00:30:12.311 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:12.311 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:12.311 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2090938' 00:30:12.311 killing process with pid 2090938 00:30:12.311 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # kill 2090938 00:30:12.311 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # wait 2090938 00:30:12.311 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:12.311 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:12.311 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:12.311 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:12.311 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:12.311 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:12.311 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:12.311 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:12.881 10:46:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:12.881 00:30:12.881 real 1m15.857s 00:30:12.881 user 4m36.480s 00:30:12.882 sys 0m8.499s 00:30:12.882 10:46:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:12.882 10:46:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:12.882 ************************************ 00:30:12.882 END TEST nvmf_initiator_timeout 00:30:12.882 ************************************ 00:30:12.882 10:46:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:12.882 10:46:18 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:30:12.882 10:46:18 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:30:12.882 10:46:18 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:30:12.882 10:46:18 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:30:12.882 10:46:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 
00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:21.028 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:21.028 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@389 
-- # for net_dev in "${!pci_net_devs[@]}" 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:21.028 Found net devices under 0000:31:00.0: cvl_0_0 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:21.028 Found net devices under 0000:31:00.1: cvl_0_1 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:30:21.028 10:46:26 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:30:21.028 10:46:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:21.028 10:46:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:21.028 10:46:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:21.028 ************************************ 00:30:21.028 START TEST nvmf_perf_adq 00:30:21.028 ************************************ 00:30:21.028 10:46:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:30:21.028 * Looking for test storage... 
00:30:21.028 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:21.028 10:46:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:21.028 10:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:30:21.028 10:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:21.028 10:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:21.028 10:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:21.028 10:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:21.028 10:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:21.028 10:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:21.028 10:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:21.028 10:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:21.028 10:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:21.028 10:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:21.028 10:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:21.028 10:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:21.028 10:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:21.028 10:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:21.028 10:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:21.028 10:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:21.029 10:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:21.029 10:46:26 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:21.029 10:46:26 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:21.029 10:46:26 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:21.029 10:46:26 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.029 10:46:26 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.029 10:46:26 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.029 10:46:26 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:30:21.029 10:46:26 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.029 10:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:30:21.029 10:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:21.029 10:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:21.029 10:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:21.029 10:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:21.029 10:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:21.029 10:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:21.029 10:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:21.029 10:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:21.029 10:46:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:30:21.029 10:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:30:21.029 10:46:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:29.163 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:29.163 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:30:29.163 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:29.163 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:29.163 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:29.163 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:29.163 10:46:34 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:30:29.163 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:30:29.163 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:29.163 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:30:29.163 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:30:29.163 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:30:29.163 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:30:29.163 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:30:29.163 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:30:29.163 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:29.163 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:29.163 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:29.163 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:29.163 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:29.163 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:29.163 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:29.163 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:29.163 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:29.163 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:29.163 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:29.163 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:29.163 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:29.163 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:29.163 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:29.163 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:29.163 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:29.163 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:29.163 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:29.163 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:29.163 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:29.163 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:29.163 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:29.163 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:29.164 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:29.164 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:29.164 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:29.164 Found 0000:31:00.1 (0x8086 - 0x159b) 
00:30:29.164 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:29.164 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:29.164 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:29.164 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:29.164 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:29.164 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:29.164 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:29.164 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:29.164 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:29.164 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:29.164 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:29.164 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:29.164 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:29.164 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:29.164 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:29.164 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:29.164 Found net devices under 0000:31:00.0: cvl_0_0 00:30:29.164 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:29.164 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:29.164 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:29.164 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:29.164 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:29.164 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:29.164 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:29.164 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:29.164 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:29.164 Found net devices under 0000:31:00.1: cvl_0_1 00:30:29.164 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:29.164 10:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:29.164 10:46:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:29.164 10:46:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:30:29.164 10:46:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:30:29.164 10:46:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:30:29.164 10:46:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:30:30.104 10:46:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:30:32.015 10:46:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:30:37.313 10:46:42 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:30:37.313 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:37.313 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:37.314 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:37.314 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:37.314 Found net devices under 0000:31:00.0: cvl_0_0 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:37.314 Found net devices under 0000:31:00.1: cvl_0_1 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:37.314 10:46:42 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:37.314 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:37.314 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.675 ms 00:30:37.314 00:30:37.314 --- 10.0.0.2 ping statistics --- 00:30:37.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:37.314 rtt min/avg/max/mdev = 0.675/0.675/0.675/0.000 ms 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:37.314 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:37.314 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.334 ms 00:30:37.314 00:30:37.314 --- 10.0.0.1 ping statistics --- 00:30:37.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:37.314 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2113843 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2113843 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 2113843 ']' 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:37.314 10:46:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:37.315 10:46:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:37.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:37.315 10:46:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:37.315 10:46:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:37.315 [2024-07-22 10:46:42.782814] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
00:30:37.315 [2024-07-22 10:46:42.782875] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:37.315 EAL: No free 2048 kB hugepages reported on node 1 00:30:37.315 [2024-07-22 10:46:42.861785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:37.315 [2024-07-22 10:46:42.902381] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:37.315 [2024-07-22 10:46:42.902428] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:37.315 [2024-07-22 10:46:42.902436] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:37.315 [2024-07-22 10:46:42.902443] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:37.315 [2024-07-22 10:46:42.902449] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:37.315 [2024-07-22 10:46:42.902614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:37.315 [2024-07-22 10:46:42.902736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:37.315 [2024-07-22 10:46:42.902896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:37.315 [2024-07-22 10:46:42.902897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:37.884 10:46:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:37.884 10:46:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:30:37.884 10:46:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:37.884 10:46:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:37.884 10:46:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:38.145 10:46:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:38.145 10:46:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:30:38.145 10:46:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:30:38.145 10:46:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:30:38.145 10:46:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.145 10:46:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:38.145 10:46:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.145 10:46:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:30:38.145 10:46:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:30:38.145 10:46:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.145 10:46:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:38.145 10:46:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.145 10:46:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:30:38.145 10:46:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.145 10:46:43 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:30:38.145 10:46:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.145 10:46:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:30:38.145 10:46:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.145 10:46:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:38.145 [2024-07-22 10:46:43.742391] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:38.145 10:46:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.145 10:46:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:30:38.145 10:46:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.145 10:46:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:38.145 Malloc1 00:30:38.145 10:46:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.145 10:46:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:38.145 10:46:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.145 10:46:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:38.145 10:46:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.145 10:46:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:38.145 10:46:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.145 10:46:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:38.145 10:46:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.145 10:46:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:38.145 10:46:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.145 10:46:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:38.145 [2024-07-22 10:46:43.801694] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:38.145 10:46:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.145 10:46:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=2114194 00:30:38.145 10:46:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:30:38.145 10:46:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:38.145 EAL: No free 2048 kB hugepages reported on node 1 00:30:40.702 10:46:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:30:40.702 10:46:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.702 10:46:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:40.702 10:46:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.702 10:46:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:30:40.702 
"tick_rate": 2400000000, 00:30:40.702 "poll_groups": [ 00:30:40.702 { 00:30:40.702 "name": "nvmf_tgt_poll_group_000", 00:30:40.702 "admin_qpairs": 1, 00:30:40.702 "io_qpairs": 1, 00:30:40.702 "current_admin_qpairs": 1, 00:30:40.702 "current_io_qpairs": 1, 00:30:40.702 "pending_bdev_io": 0, 00:30:40.702 "completed_nvme_io": 20551, 00:30:40.702 "transports": [ 00:30:40.702 { 00:30:40.702 "trtype": "TCP" 00:30:40.702 } 00:30:40.702 ] 00:30:40.702 }, 00:30:40.702 { 00:30:40.702 "name": "nvmf_tgt_poll_group_001", 00:30:40.702 "admin_qpairs": 0, 00:30:40.702 "io_qpairs": 1, 00:30:40.702 "current_admin_qpairs": 0, 00:30:40.702 "current_io_qpairs": 1, 00:30:40.702 "pending_bdev_io": 0, 00:30:40.702 "completed_nvme_io": 28757, 00:30:40.702 "transports": [ 00:30:40.702 { 00:30:40.702 "trtype": "TCP" 00:30:40.702 } 00:30:40.702 ] 00:30:40.702 }, 00:30:40.702 { 00:30:40.702 "name": "nvmf_tgt_poll_group_002", 00:30:40.702 "admin_qpairs": 0, 00:30:40.702 "io_qpairs": 1, 00:30:40.702 "current_admin_qpairs": 0, 00:30:40.702 "current_io_qpairs": 1, 00:30:40.702 "pending_bdev_io": 0, 00:30:40.702 "completed_nvme_io": 22471, 00:30:40.702 "transports": [ 00:30:40.702 { 00:30:40.702 "trtype": "TCP" 00:30:40.702 } 00:30:40.702 ] 00:30:40.702 }, 00:30:40.702 { 00:30:40.702 "name": "nvmf_tgt_poll_group_003", 00:30:40.702 "admin_qpairs": 0, 00:30:40.702 "io_qpairs": 1, 00:30:40.702 "current_admin_qpairs": 0, 00:30:40.702 "current_io_qpairs": 1, 00:30:40.702 "pending_bdev_io": 0, 00:30:40.702 "completed_nvme_io": 20442, 00:30:40.702 "transports": [ 00:30:40.702 { 00:30:40.702 "trtype": "TCP" 00:30:40.702 } 00:30:40.702 ] 00:30:40.702 } 00:30:40.702 ] 00:30:40.702 }' 00:30:40.702 10:46:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:30:40.702 10:46:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:30:40.702 10:46:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:30:40.702 10:46:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:30:40.702 10:46:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 2114194 00:30:48.836 Initializing NVMe Controllers 00:30:48.836 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:48.837 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:30:48.837 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:30:48.837 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:30:48.837 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:30:48.837 Initialization complete. Launching workers. 
00:30:48.837 ======================================================== 00:30:48.837 Latency(us) 00:30:48.837 Device Information : IOPS MiB/s Average min max 00:30:48.837 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 13526.80 52.84 4732.07 1357.43 8868.72 00:30:48.837 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 15653.20 61.15 4088.95 1313.75 9788.93 00:30:48.837 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13790.90 53.87 4640.93 1320.12 12350.52 00:30:48.837 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11534.20 45.06 5549.92 1624.39 12142.62 00:30:48.837 ======================================================== 00:30:48.837 Total : 54505.10 212.91 4697.39 1313.75 12350.52 00:30:48.837 00:30:48.837 10:46:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:30:48.837 10:46:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:48.837 10:46:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:30:48.837 10:46:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:48.837 10:46:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:30:48.837 10:46:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:48.837 10:46:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:48.837 rmmod nvme_tcp 00:30:48.837 rmmod nvme_fabrics 00:30:48.837 rmmod nvme_keyring 00:30:48.837 10:46:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:48.837 10:46:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:30:48.837 10:46:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:30:48.837 10:46:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2113843 ']' 00:30:48.837 10:46:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2113843 00:30:48.837 10:46:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 2113843 ']' 00:30:48.837 10:46:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 2113843 00:30:48.837 10:46:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:30:48.837 10:46:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:48.837 10:46:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2113843 00:30:48.837 10:46:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:48.837 10:46:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:48.837 10:46:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2113843' 00:30:48.837 killing process with pid 2113843 00:30:48.837 10:46:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 2113843 00:30:48.837 10:46:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 2113843 00:30:48.837 10:46:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:48.837 10:46:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:48.837 10:46:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:48.837 10:46:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:48.837 10:46:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:48.837 10:46:54 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:48.837 10:46:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:48.837 10:46:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:50.744 10:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:50.744 10:46:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:30:50.744 10:46:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:30:52.663 10:46:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:30:54.046 10:46:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:30:59.477 10:47:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:30:59.477 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:59.477 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:59.477 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:59.477 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:59.477 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:59.477 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:59.477 10:47:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:59.477 10:47:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:59.477 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:59.477 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:59.477 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:30:59.477 10:47:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:59.477 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:59.477 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:30:59.477 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:59.477 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:59.477 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:59.477 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:59.477 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:59.477 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:30:59.477 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:59.477 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:30:59.477 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:30:59.477 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:30:59.477 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:30:59.477 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:30:59.477 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:30:59.477 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:59.477 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:59.477 10:47:04 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:59.477 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:59.477 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:59.477 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:59.477 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:59.477 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:59.477 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:59.477 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:59.477 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:59.477 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:59.477 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:59.477 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:59.477 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:59.477 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:59.477 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:59.477 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:59.477 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:59.477 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:59.477 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:59.477 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:59.477 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:59.477 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:59.477 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:59.477 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:59.477 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:59.477 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:59.477 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:59.478 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:59.478 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:59.478 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:59.478 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:59.478 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:59.478 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:59.478 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:59.478 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:59.478 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:30:59.478 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:59.478 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:59.478 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:59.478 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:59.478 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:59.478 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:59.478 Found net devices under 0000:31:00.0: cvl_0_0 00:30:59.478 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:59.478 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:59.478 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:59.478 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:59.478 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:59.478 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:59.478 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:59.478 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:59.478 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:59.478 Found net devices under 0000:31:00.1: cvl_0_1 00:30:59.478 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:59.478 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:59.478 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:30:59.478 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:59.478 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:59.478 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:59.478 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:59.478 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:59.478 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:59.478 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:59.478 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:59.478 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:59.478 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:59.478 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:59.478 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:59.478 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:59.478 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:59.478 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:59.478 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:59.478 
10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:59.478 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:59.478 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:59.478 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:59.478 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:59.478 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:59.478 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:59.478 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:59.478 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.553 ms 00:30:59.478 00:30:59.478 --- 10.0.0.2 ping statistics --- 00:30:59.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:59.478 rtt min/avg/max/mdev = 0.553/0.553/0.553/0.000 ms 00:30:59.478 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:59.478 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:59.478 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:30:59.478 00:30:59.478 --- 10.0.0.1 ping statistics --- 00:30:59.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:59.478 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:30:59.478 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:59.478 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:30:59.478 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:59.478 10:47:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:59.478 10:47:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:59.478 10:47:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:59.478 10:47:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:59.478 10:47:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:59.478 10:47:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:59.478 10:47:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:30:59.478 10:47:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:30:59.478 10:47:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:30:59.478 10:47:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:30:59.478 net.core.busy_poll = 1 00:30:59.478 10:47:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:30:59.478 net.core.busy_read = 1 00:30:59.478 10:47:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:30:59.478 10:47:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:30:59.737 10:47:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:30:59.737 10:47:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:30:59.737 10:47:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:30:59.737 10:47:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:30:59.737 10:47:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:59.737 10:47:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:59.737 10:47:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:59.737 10:47:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2118653 00:30:59.737 10:47:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2118653 00:30:59.737 10:47:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:59.737 10:47:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 2118653 ']' 00:30:59.737 10:47:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:59.737 10:47:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:59.738 10:47:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:59.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:59.738 10:47:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:59.738 10:47:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:59.738 [2024-07-22 10:47:05.383794] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:30:59.738 [2024-07-22 10:47:05.383842] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:59.738 EAL: No free 2048 kB hugepages reported on node 1 00:30:59.998 [2024-07-22 10:47:05.474001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:59.998 [2024-07-22 10:47:05.513318] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:59.998 [2024-07-22 10:47:05.513355] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:59.998 [2024-07-22 10:47:05.513365] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:59.998 [2024-07-22 10:47:05.513373] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:59.998 [2024-07-22 10:47:05.513380] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:59.998 [2024-07-22 10:47:05.513453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:59.998 [2024-07-22 10:47:05.513595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:59.998 [2024-07-22 10:47:05.513759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:59.998 [2024-07-22 10:47:05.513760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:59.998 10:47:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:59.998 10:47:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:30:59.998 10:47:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:59.998 10:47:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:59.998 10:47:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:59.998 10:47:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:59.998 10:47:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:30:59.998 10:47:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:30:59.998 10:47:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:30:59.998 10:47:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.998 10:47:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:59.998 10:47:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.998 10:47:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:30:59.998 10:47:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:30:59.998 10:47:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.998 10:47:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:59.998 10:47:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.998 10:47:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:30:59.998 10:47:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.998 10:47:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:00.258 10:47:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:00.258 10:47:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:31:00.258 10:47:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:00.258 10:47:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:00.258 [2024-07-22 10:47:05.716692] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:00.258 10:47:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:00.258 10:47:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:31:00.258 10:47:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:00.258 10:47:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:00.258 Malloc1 00:31:00.258 10:47:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:00.258 10:47:05 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:00.258 10:47:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:00.258 10:47:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:00.258 10:47:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:00.258 10:47:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:00.258 10:47:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:00.258 10:47:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:00.258 10:47:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:00.258 10:47:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:00.258 10:47:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:00.258 10:47:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:00.258 [2024-07-22 10:47:05.775963] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:00.258 10:47:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:00.258 10:47:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=2118681 00:31:00.258 10:47:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:31:00.258 10:47:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:00.258 EAL: No free 2048 kB hugepages reported on node 1 00:31:02.172 10:47:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:31:02.172 10:47:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:02.172 10:47:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:02.172 10:47:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:02.172 10:47:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:31:02.172 "tick_rate": 2400000000, 00:31:02.172 "poll_groups": [ 00:31:02.172 { 00:31:02.172 "name": "nvmf_tgt_poll_group_000", 00:31:02.172 "admin_qpairs": 1, 00:31:02.172 "io_qpairs": 1, 00:31:02.172 "current_admin_qpairs": 1, 00:31:02.172 "current_io_qpairs": 1, 00:31:02.172 "pending_bdev_io": 0, 00:31:02.172 "completed_nvme_io": 27582, 00:31:02.172 "transports": [ 00:31:02.172 { 00:31:02.172 "trtype": "TCP" 00:31:02.172 } 00:31:02.172 ] 00:31:02.172 }, 00:31:02.172 { 00:31:02.172 "name": "nvmf_tgt_poll_group_001", 00:31:02.172 "admin_qpairs": 0, 00:31:02.172 "io_qpairs": 3, 00:31:02.172 "current_admin_qpairs": 0, 00:31:02.172 "current_io_qpairs": 3, 00:31:02.172 "pending_bdev_io": 0, 00:31:02.172 "completed_nvme_io": 42756, 00:31:02.172 "transports": [ 00:31:02.172 { 00:31:02.172 "trtype": "TCP" 00:31:02.172 } 00:31:02.172 ] 00:31:02.172 }, 00:31:02.172 { 00:31:02.172 "name": "nvmf_tgt_poll_group_002", 00:31:02.172 "admin_qpairs": 0, 00:31:02.172 "io_qpairs": 0, 00:31:02.172 "current_admin_qpairs": 0, 00:31:02.172 "current_io_qpairs": 0, 00:31:02.172 "pending_bdev_io": 0, 00:31:02.172 "completed_nvme_io": 0, 
00:31:02.172 "transports": [ 00:31:02.172 { 00:31:02.172 "trtype": "TCP" 00:31:02.172 } 00:31:02.172 ] 00:31:02.172 }, 00:31:02.172 { 00:31:02.172 "name": "nvmf_tgt_poll_group_003", 00:31:02.172 "admin_qpairs": 0, 00:31:02.172 "io_qpairs": 0, 00:31:02.172 "current_admin_qpairs": 0, 00:31:02.172 "current_io_qpairs": 0, 00:31:02.172 "pending_bdev_io": 0, 00:31:02.172 "completed_nvme_io": 0, 00:31:02.172 "transports": [ 00:31:02.172 { 00:31:02.172 "trtype": "TCP" 00:31:02.172 } 00:31:02.172 ] 00:31:02.172 } 00:31:02.172 ] 00:31:02.172 }' 00:31:02.172 10:47:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:31:02.172 10:47:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:31:02.172 10:47:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:31:02.172 10:47:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:31:02.172 10:47:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 2118681 00:31:10.307 Initializing NVMe Controllers 00:31:10.307 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:10.307 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:31:10.307 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:31:10.307 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:31:10.307 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:31:10.307 Initialization complete. Launching workers. 00:31:10.307 ======================================================== 00:31:10.307 Latency(us) 00:31:10.307 Device Information : IOPS MiB/s Average min max 00:31:10.307 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7151.90 27.94 8949.54 1234.75 53089.81 00:31:10.307 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 8085.20 31.58 7915.13 1259.80 53565.18 00:31:10.307 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6767.00 26.43 9457.50 1411.05 54647.63 00:31:10.307 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 18371.30 71.76 3489.74 1116.75 44477.13 00:31:10.307 ======================================================== 00:31:10.307 Total : 40375.40 157.72 6343.26 1116.75 54647.63 00:31:10.307 00:31:10.307 10:47:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:31:10.307 10:47:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:10.307 10:47:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:31:10.307 10:47:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:10.307 10:47:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:31:10.307 10:47:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:10.307 10:47:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:10.307 rmmod nvme_tcp 00:31:10.307 rmmod nvme_fabrics 00:31:10.568 rmmod nvme_keyring 00:31:10.568 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:10.568 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:31:10.568 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:31:10.568 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2118653 ']' 00:31:10.568 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 2118653 00:31:10.568 10:47:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 2118653 ']' 00:31:10.568 10:47:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 2118653 00:31:10.568 10:47:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:31:10.568 10:47:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:10.568 10:47:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2118653 00:31:10.568 10:47:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:10.568 10:47:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:10.568 10:47:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2118653' 00:31:10.568 killing process with pid 2118653 00:31:10.568 10:47:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 2118653 00:31:10.568 10:47:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 2118653 00:31:10.568 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:10.568 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:10.568 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:10.568 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:10.568 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:10.568 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:10.568 10:47:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:10.568 10:47:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:13.892 10:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:13.892 10:47:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:31:13.892 00:31:13.892 real 0m53.166s 00:31:13.892 user 2m46.781s 00:31:13.892 sys 0m11.429s 00:31:13.892 10:47:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:13.892 10:47:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:13.892 ************************************ 00:31:13.892 END TEST nvmf_perf_adq 00:31:13.892 ************************************ 00:31:13.892 10:47:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:13.892 10:47:19 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:31:13.892 10:47:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:13.892 10:47:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:13.892 10:47:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:13.892 ************************************ 00:31:13.892 START TEST nvmf_shutdown 00:31:13.892 ************************************ 00:31:13.892 10:47:19 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:31:13.892 * Looking for test storage... 
00:31:13.892 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:13.892 10:47:19 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:13.892 10:47:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:31:13.892 10:47:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:13.892 10:47:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:13.892 10:47:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:13.892 10:47:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:13.892 10:47:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:13.892 10:47:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:13.892 10:47:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:13.892 10:47:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:13.892 10:47:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:13.892 10:47:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:13.892 10:47:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:13.892 10:47:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:13.892 10:47:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:13.892 10:47:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:13.892 10:47:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:13.892 10:47:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:13.892 10:47:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:13.892 10:47:19 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:13.892 10:47:19 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:13.892 10:47:19 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:13.892 10:47:19 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.892 10:47:19 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.893 10:47:19 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.893 10:47:19 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:31:13.893 10:47:19 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.893 10:47:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:31:13.893 10:47:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:13.893 10:47:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:13.893 10:47:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:13.893 10:47:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:13.893 10:47:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:13.893 10:47:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:13.893 10:47:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:13.893 10:47:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:13.893 10:47:19 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:13.893 10:47:19 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:13.893 10:47:19 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:31:13.893 10:47:19 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:13.893 10:47:19 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:13.893 10:47:19 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:13.893 ************************************ 00:31:13.893 START TEST nvmf_shutdown_tc1 00:31:13.893 ************************************ 00:31:13.893 10:47:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:31:13.893 10:47:19 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:31:13.893 10:47:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:31:13.893 10:47:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:13.893 10:47:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:13.893 10:47:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:13.893 10:47:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:13.893 10:47:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:13.893 10:47:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:13.893 10:47:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:13.893 10:47:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:13.893 10:47:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:13.893 10:47:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:13.893 10:47:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:31:13.893 10:47:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:22.029 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:22.029 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:22.029 10:47:27 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:22.029 Found net devices under 0000:31:00.0: cvl_0_0 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:22.029 Found net devices under 0000:31:00.1: cvl_0_1 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:22.029 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:22.030 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:22.030 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:22.030 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.536 ms 00:31:22.030 00:31:22.030 --- 10.0.0.2 ping statistics --- 00:31:22.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:22.030 rtt min/avg/max/mdev = 0.536/0.536/0.536/0.000 ms 00:31:22.030 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:22.030 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:22.030 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:31:22.030 00:31:22.030 --- 10.0.0.1 ping statistics --- 00:31:22.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:22.030 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:31:22.030 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:22.030 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:31:22.030 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:22.030 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:22.030 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:22.030 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:22.030 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:22.030 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:22.030 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:22.030 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:31:22.030 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:22.030 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:22.030 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:22.030 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=2125528 00:31:22.030 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 2125528 00:31:22.030 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:31:22.030 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 2125528 ']' 00:31:22.030 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:22.030 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:22.030 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:22.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:22.030 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:22.030 10:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:22.289 [2024-07-22 10:47:27.751721] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
00:31:22.289 [2024-07-22 10:47:27.751783] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:22.289 EAL: No free 2048 kB hugepages reported on node 1 00:31:22.289 [2024-07-22 10:47:27.845795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:22.289 [2024-07-22 10:47:27.894661] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:22.289 [2024-07-22 10:47:27.894713] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:22.289 [2024-07-22 10:47:27.894721] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:22.289 [2024-07-22 10:47:27.894728] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:22.289 [2024-07-22 10:47:27.894734] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:22.289 [2024-07-22 10:47:27.894856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:22.289 [2024-07-22 10:47:27.895013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:22.289 [2024-07-22 10:47:27.895177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:22.289 [2024-07-22 10:47:27.895178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:31:22.857 10:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:22.858 10:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:31:22.858 10:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:22.858 10:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:22.858 10:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:23.130 10:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:23.130 10:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:23.130 10:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.130 10:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:23.130 [2024-07-22 10:47:28.579080] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:23.130 10:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.130 10:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:31:23.130 10:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:31:23.130 10:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:23.130 10:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:23.130 10:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:23.130 10:47:28 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:23.130 10:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:31:23.130 10:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:23.130 10:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:31:23.130 10:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:23.130 10:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:31:23.130 10:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:23.130 10:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:31:23.130 10:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:23.130 10:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:31:23.130 10:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:23.130 10:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:31:23.130 10:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:23.130 10:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:31:23.130 10:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:23.130 10:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:31:23.130 10:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:23.130 10:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:31:23.130 10:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:23.130 10:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:31:23.130 10:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:31:23.130 10:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.130 10:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:23.130 Malloc1 00:31:23.130 [2024-07-22 10:47:28.682526] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:23.130 Malloc2 00:31:23.130 Malloc3 00:31:23.130 Malloc4 00:31:23.130 Malloc5 00:31:23.390 Malloc6 00:31:23.390 Malloc7 00:31:23.390 Malloc8 00:31:23.390 Malloc9 00:31:23.390 Malloc10 00:31:23.390 10:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.390 10:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:31:23.390 10:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:23.390 10:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:23.390 10:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=2125874 00:31:23.390 10:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 2125874 
/var/tmp/bdevperf.sock 00:31:23.390 10:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 2125874 ']' 00:31:23.390 10:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:23.390 10:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:23.390 10:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:23.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:23.390 10:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:31:23.390 10:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:23.390 10:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:31:23.390 10:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:23.390 10:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:31:23.390 10:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:31:23.390 10:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:23.390 10:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:23.390 { 00:31:23.390 "params": { 00:31:23.390 "name": "Nvme$subsystem", 00:31:23.390 "trtype": "$TEST_TRANSPORT", 00:31:23.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:23.390 "adrfam": "ipv4", 00:31:23.390 "trsvcid": "$NVMF_PORT", 00:31:23.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:23.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:23.390 "hdgst": ${hdgst:-false}, 00:31:23.390 "ddgst": ${ddgst:-false} 00:31:23.390 }, 00:31:23.390 "method": "bdev_nvme_attach_controller" 00:31:23.390 } 00:31:23.390 EOF 00:31:23.390 )") 00:31:23.390 10:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:23.651 10:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:23.651 10:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:23.651 { 00:31:23.651 "params": { 00:31:23.651 "name": "Nvme$subsystem", 00:31:23.651 "trtype": "$TEST_TRANSPORT", 00:31:23.651 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:23.651 "adrfam": "ipv4", 00:31:23.651 "trsvcid": "$NVMF_PORT", 00:31:23.651 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:23.651 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:23.651 "hdgst": ${hdgst:-false}, 00:31:23.651 "ddgst": ${ddgst:-false} 00:31:23.651 }, 00:31:23.651 "method": "bdev_nvme_attach_controller" 00:31:23.651 } 00:31:23.651 EOF 00:31:23.651 )") 00:31:23.651 10:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:23.651 10:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:23.651 10:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:23.651 { 00:31:23.651 "params": { 00:31:23.651 
"name": "Nvme$subsystem", 00:31:23.651 "trtype": "$TEST_TRANSPORT", 00:31:23.651 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:23.651 "adrfam": "ipv4", 00:31:23.651 "trsvcid": "$NVMF_PORT", 00:31:23.651 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:23.651 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:23.651 "hdgst": ${hdgst:-false}, 00:31:23.651 "ddgst": ${ddgst:-false} 00:31:23.651 }, 00:31:23.651 "method": "bdev_nvme_attach_controller" 00:31:23.651 } 00:31:23.651 EOF 00:31:23.651 )") 00:31:23.651 10:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:23.651 10:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:23.651 10:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:23.651 { 00:31:23.651 "params": { 00:31:23.651 "name": "Nvme$subsystem", 00:31:23.651 "trtype": "$TEST_TRANSPORT", 00:31:23.651 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:23.651 "adrfam": "ipv4", 00:31:23.651 "trsvcid": "$NVMF_PORT", 00:31:23.651 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:23.651 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:23.651 "hdgst": ${hdgst:-false}, 00:31:23.651 "ddgst": ${ddgst:-false} 00:31:23.651 }, 00:31:23.651 "method": "bdev_nvme_attach_controller" 00:31:23.651 } 00:31:23.651 EOF 00:31:23.651 )") 00:31:23.651 10:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:23.651 10:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:23.651 10:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:23.651 { 00:31:23.651 "params": { 00:31:23.651 "name": "Nvme$subsystem", 00:31:23.651 "trtype": "$TEST_TRANSPORT", 00:31:23.651 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:23.651 "adrfam": "ipv4", 00:31:23.651 "trsvcid": "$NVMF_PORT", 00:31:23.651 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:23.651 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:23.651 "hdgst": ${hdgst:-false}, 00:31:23.651 "ddgst": ${ddgst:-false} 00:31:23.651 }, 00:31:23.651 "method": "bdev_nvme_attach_controller" 00:31:23.651 } 00:31:23.651 EOF 00:31:23.651 )") 00:31:23.651 10:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:23.651 10:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:23.651 10:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:23.651 { 00:31:23.651 "params": { 00:31:23.651 "name": "Nvme$subsystem", 00:31:23.651 "trtype": "$TEST_TRANSPORT", 00:31:23.651 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:23.651 "adrfam": "ipv4", 00:31:23.651 "trsvcid": "$NVMF_PORT", 00:31:23.651 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:23.651 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:23.651 "hdgst": ${hdgst:-false}, 00:31:23.651 "ddgst": ${ddgst:-false} 00:31:23.651 }, 00:31:23.651 "method": "bdev_nvme_attach_controller" 00:31:23.651 } 00:31:23.651 EOF 00:31:23.651 )") 00:31:23.651 10:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:23.651 [2024-07-22 10:47:29.128504] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
00:31:23.651 [2024-07-22 10:47:29.128556] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:31:23.651 10:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:23.651 10:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:23.651 { 00:31:23.651 "params": { 00:31:23.651 "name": "Nvme$subsystem", 00:31:23.651 "trtype": "$TEST_TRANSPORT", 00:31:23.651 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:23.651 "adrfam": "ipv4", 00:31:23.651 "trsvcid": "$NVMF_PORT", 00:31:23.651 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:23.651 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:23.651 "hdgst": ${hdgst:-false}, 00:31:23.651 "ddgst": ${ddgst:-false} 00:31:23.651 }, 00:31:23.651 "method": "bdev_nvme_attach_controller" 00:31:23.651 } 00:31:23.651 EOF 00:31:23.651 )") 00:31:23.651 10:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:23.651 10:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:23.651 10:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:23.651 { 00:31:23.651 "params": { 00:31:23.651 "name": "Nvme$subsystem", 00:31:23.651 "trtype": "$TEST_TRANSPORT", 00:31:23.651 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:23.651 "adrfam": "ipv4", 00:31:23.651 "trsvcid": "$NVMF_PORT", 00:31:23.651 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:23.651 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:23.651 "hdgst": ${hdgst:-false}, 00:31:23.651 "ddgst": ${ddgst:-false} 00:31:23.651 }, 00:31:23.651 "method": "bdev_nvme_attach_controller" 00:31:23.651 } 00:31:23.651 EOF 00:31:23.651 )") 00:31:23.651 10:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:23.651 10:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:23.651 10:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:23.651 { 00:31:23.651 "params": { 00:31:23.651 "name": "Nvme$subsystem", 00:31:23.651 "trtype": "$TEST_TRANSPORT", 00:31:23.651 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:23.651 "adrfam": "ipv4", 00:31:23.651 "trsvcid": "$NVMF_PORT", 00:31:23.651 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:23.651 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:23.651 "hdgst": ${hdgst:-false}, 00:31:23.651 "ddgst": ${ddgst:-false} 00:31:23.651 }, 00:31:23.651 "method": "bdev_nvme_attach_controller" 00:31:23.651 } 00:31:23.651 EOF 00:31:23.651 )") 00:31:23.651 10:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:23.651 10:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:23.651 10:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:23.651 { 00:31:23.651 "params": { 00:31:23.651 "name": "Nvme$subsystem", 00:31:23.651 "trtype": "$TEST_TRANSPORT", 00:31:23.651 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:23.651 "adrfam": "ipv4", 00:31:23.651 "trsvcid": "$NVMF_PORT", 00:31:23.651 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:23.651 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:23.651 "hdgst": ${hdgst:-false}, 
00:31:23.651 "ddgst": ${ddgst:-false} 00:31:23.651 }, 00:31:23.651 "method": "bdev_nvme_attach_controller" 00:31:23.651 } 00:31:23.651 EOF 00:31:23.651 )") 00:31:23.651 10:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:23.651 EAL: No free 2048 kB hugepages reported on node 1 00:31:23.651 10:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:31:23.651 10:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:31:23.651 10:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:23.651 "params": { 00:31:23.651 "name": "Nvme1", 00:31:23.651 "trtype": "tcp", 00:31:23.651 "traddr": "10.0.0.2", 00:31:23.651 "adrfam": "ipv4", 00:31:23.651 "trsvcid": "4420", 00:31:23.651 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:23.651 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:23.651 "hdgst": false, 00:31:23.651 "ddgst": false 00:31:23.651 }, 00:31:23.651 "method": "bdev_nvme_attach_controller" 00:31:23.651 },{ 00:31:23.651 "params": { 00:31:23.651 "name": "Nvme2", 00:31:23.652 "trtype": "tcp", 00:31:23.652 "traddr": "10.0.0.2", 00:31:23.652 "adrfam": "ipv4", 00:31:23.652 "trsvcid": "4420", 00:31:23.652 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:23.652 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:23.652 "hdgst": false, 00:31:23.652 "ddgst": false 00:31:23.652 }, 00:31:23.652 "method": "bdev_nvme_attach_controller" 00:31:23.652 },{ 00:31:23.652 "params": { 00:31:23.652 "name": "Nvme3", 00:31:23.652 "trtype": "tcp", 00:31:23.652 "traddr": "10.0.0.2", 00:31:23.652 "adrfam": "ipv4", 00:31:23.652 "trsvcid": "4420", 00:31:23.652 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:31:23.652 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:31:23.652 "hdgst": false, 00:31:23.652 "ddgst": false 00:31:23.652 }, 00:31:23.652 "method": "bdev_nvme_attach_controller" 00:31:23.652 },{ 00:31:23.652 "params": { 00:31:23.652 "name": "Nvme4", 00:31:23.652 "trtype": "tcp", 00:31:23.652 "traddr": "10.0.0.2", 00:31:23.652 "adrfam": "ipv4", 00:31:23.652 "trsvcid": "4420", 00:31:23.652 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:31:23.652 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:31:23.652 "hdgst": false, 00:31:23.652 "ddgst": false 00:31:23.652 }, 00:31:23.652 "method": "bdev_nvme_attach_controller" 00:31:23.652 },{ 00:31:23.652 "params": { 00:31:23.652 "name": "Nvme5", 00:31:23.652 "trtype": "tcp", 00:31:23.652 "traddr": "10.0.0.2", 00:31:23.652 "adrfam": "ipv4", 00:31:23.652 "trsvcid": "4420", 00:31:23.652 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:31:23.652 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:31:23.652 "hdgst": false, 00:31:23.652 "ddgst": false 00:31:23.652 }, 00:31:23.652 "method": "bdev_nvme_attach_controller" 00:31:23.652 },{ 00:31:23.652 "params": { 00:31:23.652 "name": "Nvme6", 00:31:23.652 "trtype": "tcp", 00:31:23.652 "traddr": "10.0.0.2", 00:31:23.652 "adrfam": "ipv4", 00:31:23.652 "trsvcid": "4420", 00:31:23.652 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:31:23.652 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:31:23.652 "hdgst": false, 00:31:23.652 "ddgst": false 00:31:23.652 }, 00:31:23.652 "method": "bdev_nvme_attach_controller" 00:31:23.652 },{ 00:31:23.652 "params": { 00:31:23.652 "name": "Nvme7", 00:31:23.652 "trtype": "tcp", 00:31:23.652 "traddr": "10.0.0.2", 00:31:23.652 "adrfam": "ipv4", 00:31:23.652 "trsvcid": "4420", 00:31:23.652 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:31:23.652 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:31:23.652 "hdgst": false, 00:31:23.652 "ddgst": false 
00:31:23.652 }, 00:31:23.652 "method": "bdev_nvme_attach_controller" 00:31:23.652 },{ 00:31:23.652 "params": { 00:31:23.652 "name": "Nvme8", 00:31:23.652 "trtype": "tcp", 00:31:23.652 "traddr": "10.0.0.2", 00:31:23.652 "adrfam": "ipv4", 00:31:23.652 "trsvcid": "4420", 00:31:23.652 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:31:23.652 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:31:23.652 "hdgst": false, 00:31:23.652 "ddgst": false 00:31:23.652 }, 00:31:23.652 "method": "bdev_nvme_attach_controller" 00:31:23.652 },{ 00:31:23.652 "params": { 00:31:23.652 "name": "Nvme9", 00:31:23.652 "trtype": "tcp", 00:31:23.652 "traddr": "10.0.0.2", 00:31:23.652 "adrfam": "ipv4", 00:31:23.652 "trsvcid": "4420", 00:31:23.652 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:31:23.652 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:31:23.652 "hdgst": false, 00:31:23.652 "ddgst": false 00:31:23.652 }, 00:31:23.652 "method": "bdev_nvme_attach_controller" 00:31:23.652 },{ 00:31:23.652 "params": { 00:31:23.652 "name": "Nvme10", 00:31:23.652 "trtype": "tcp", 00:31:23.652 "traddr": "10.0.0.2", 00:31:23.652 "adrfam": "ipv4", 00:31:23.652 "trsvcid": "4420", 00:31:23.652 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:31:23.652 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:31:23.652 "hdgst": false, 00:31:23.652 "ddgst": false 00:31:23.652 }, 00:31:23.652 "method": "bdev_nvme_attach_controller" 00:31:23.652 }' 00:31:23.652 [2024-07-22 10:47:29.194154] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:23.652 [2024-07-22 10:47:29.225420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:25.032 10:47:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:25.032 10:47:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:31:25.032 10:47:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:31:25.032 10:47:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.032 10:47:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:25.032 10:47:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.032 10:47:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 2125874 00:31:25.032 10:47:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:31:25.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2125874 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:31:25.032 10:47:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:31:25.970 10:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 2125528 00:31:25.970 10:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:31:25.970 10:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:31:25.970 10:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:31:25.970 10:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 
00:31:25.970 10:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:25.970 10:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:25.970 { 00:31:25.970 "params": { 00:31:25.970 "name": "Nvme$subsystem", 00:31:25.970 "trtype": "$TEST_TRANSPORT", 00:31:25.970 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:25.970 "adrfam": "ipv4", 00:31:25.970 "trsvcid": "$NVMF_PORT", 00:31:25.970 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:25.970 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:25.970 "hdgst": ${hdgst:-false}, 00:31:25.970 "ddgst": ${ddgst:-false} 00:31:25.970 }, 00:31:25.970 "method": "bdev_nvme_attach_controller" 00:31:25.970 } 00:31:25.970 EOF 00:31:25.970 )") 00:31:25.970 10:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:25.970 10:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:25.970 10:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:25.970 { 00:31:25.970 "params": { 00:31:25.970 "name": "Nvme$subsystem", 00:31:25.970 "trtype": "$TEST_TRANSPORT", 00:31:25.970 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:25.970 "adrfam": "ipv4", 00:31:25.970 "trsvcid": "$NVMF_PORT", 00:31:25.970 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:25.970 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:25.970 "hdgst": ${hdgst:-false}, 00:31:25.970 "ddgst": ${ddgst:-false} 00:31:25.970 }, 00:31:25.970 "method": "bdev_nvme_attach_controller" 00:31:25.970 } 00:31:25.970 EOF 00:31:25.970 )") 00:31:25.970 10:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:25.970 10:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:25.970 10:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:25.970 { 00:31:25.970 "params": { 00:31:25.970 "name": "Nvme$subsystem", 00:31:25.970 "trtype": "$TEST_TRANSPORT", 00:31:25.970 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:25.970 "adrfam": "ipv4", 00:31:25.970 "trsvcid": "$NVMF_PORT", 00:31:25.970 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:25.970 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:25.970 "hdgst": ${hdgst:-false}, 00:31:25.970 "ddgst": ${ddgst:-false} 00:31:25.970 }, 00:31:25.970 "method": "bdev_nvme_attach_controller" 00:31:25.970 } 00:31:25.970 EOF 00:31:25.970 )") 00:31:25.970 10:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:25.970 10:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:25.970 10:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:25.970 { 00:31:25.970 "params": { 00:31:25.970 "name": "Nvme$subsystem", 00:31:25.970 "trtype": "$TEST_TRANSPORT", 00:31:25.970 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:25.970 "adrfam": "ipv4", 00:31:25.970 "trsvcid": "$NVMF_PORT", 00:31:25.970 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:25.970 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:25.970 "hdgst": ${hdgst:-false}, 00:31:25.970 "ddgst": ${ddgst:-false} 00:31:25.970 }, 00:31:25.970 "method": "bdev_nvme_attach_controller" 00:31:25.970 } 00:31:25.970 EOF 00:31:25.970 )") 00:31:25.970 10:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:25.970 [2024-07-22 
10:47:31.517964] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:31:25.970 [2024-07-22 10:47:31.518013] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2126399 ] 00:31:25.970 10:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:25.970 10:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:25.970 { 00:31:25.970 "params": { 00:31:25.970 "name": "Nvme$subsystem", 00:31:25.970 "trtype": "$TEST_TRANSPORT", 00:31:25.970 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:25.970 "adrfam": "ipv4", 00:31:25.970 "trsvcid": "$NVMF_PORT", 00:31:25.970 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:25.970 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:25.970 "hdgst": ${hdgst:-false}, 00:31:25.970 "ddgst": ${ddgst:-false} 00:31:25.970 }, 00:31:25.971 "method": "bdev_nvme_attach_controller" 00:31:25.971 } 00:31:25.971 EOF 00:31:25.971 )") 00:31:25.971 10:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:25.971 10:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:25.971 10:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:25.971 { 00:31:25.971 "params": { 00:31:25.971 "name": "Nvme$subsystem", 00:31:25.971 "trtype": "$TEST_TRANSPORT", 00:31:25.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:25.971 "adrfam": "ipv4", 00:31:25.971 "trsvcid": "$NVMF_PORT", 00:31:25.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:25.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:25.971 "hdgst": ${hdgst:-false}, 00:31:25.971 "ddgst": ${ddgst:-false} 00:31:25.971 }, 00:31:25.971 "method": "bdev_nvme_attach_controller" 00:31:25.971 } 00:31:25.971 EOF 00:31:25.971 )") 00:31:25.971 10:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:25.971 10:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:25.971 10:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:25.971 { 00:31:25.971 "params": { 00:31:25.971 "name": "Nvme$subsystem", 00:31:25.971 "trtype": "$TEST_TRANSPORT", 00:31:25.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:25.971 "adrfam": "ipv4", 00:31:25.971 "trsvcid": "$NVMF_PORT", 00:31:25.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:25.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:25.971 "hdgst": ${hdgst:-false}, 00:31:25.971 "ddgst": ${ddgst:-false} 00:31:25.971 }, 00:31:25.971 "method": "bdev_nvme_attach_controller" 00:31:25.971 } 00:31:25.971 EOF 00:31:25.971 )") 00:31:25.971 10:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:25.971 10:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:25.971 10:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:25.971 { 00:31:25.971 "params": { 00:31:25.971 "name": "Nvme$subsystem", 00:31:25.971 "trtype": "$TEST_TRANSPORT", 00:31:25.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:25.971 "adrfam": "ipv4", 00:31:25.971 "trsvcid": "$NVMF_PORT", 00:31:25.971 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:31:25.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:25.971 "hdgst": ${hdgst:-false}, 00:31:25.971 "ddgst": ${ddgst:-false} 00:31:25.971 }, 00:31:25.971 "method": "bdev_nvme_attach_controller" 00:31:25.971 } 00:31:25.971 EOF 00:31:25.971 )") 00:31:25.971 10:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:25.971 EAL: No free 2048 kB hugepages reported on node 1 00:31:25.971 10:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:25.971 10:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:25.971 { 00:31:25.971 "params": { 00:31:25.971 "name": "Nvme$subsystem", 00:31:25.971 "trtype": "$TEST_TRANSPORT", 00:31:25.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:25.971 "adrfam": "ipv4", 00:31:25.971 "trsvcid": "$NVMF_PORT", 00:31:25.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:25.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:25.971 "hdgst": ${hdgst:-false}, 00:31:25.971 "ddgst": ${ddgst:-false} 00:31:25.971 }, 00:31:25.971 "method": "bdev_nvme_attach_controller" 00:31:25.971 } 00:31:25.971 EOF 00:31:25.971 )") 00:31:25.971 10:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:25.971 10:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:25.971 10:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:25.971 { 00:31:25.971 "params": { 00:31:25.971 "name": "Nvme$subsystem", 00:31:25.971 "trtype": "$TEST_TRANSPORT", 00:31:25.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:25.971 "adrfam": "ipv4", 00:31:25.971 "trsvcid": "$NVMF_PORT", 00:31:25.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:25.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:25.971 "hdgst": ${hdgst:-false}, 00:31:25.971 "ddgst": ${ddgst:-false} 00:31:25.971 }, 00:31:25.971 "method": "bdev_nvme_attach_controller" 00:31:25.971 } 00:31:25.971 EOF 00:31:25.971 )") 00:31:25.971 10:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:25.971 10:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:31:25.971 10:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:31:25.971 10:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:25.971 "params": { 00:31:25.971 "name": "Nvme1", 00:31:25.971 "trtype": "tcp", 00:31:25.971 "traddr": "10.0.0.2", 00:31:25.971 "adrfam": "ipv4", 00:31:25.971 "trsvcid": "4420", 00:31:25.971 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:25.971 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:25.971 "hdgst": false, 00:31:25.971 "ddgst": false 00:31:25.971 }, 00:31:25.971 "method": "bdev_nvme_attach_controller" 00:31:25.971 },{ 00:31:25.971 "params": { 00:31:25.971 "name": "Nvme2", 00:31:25.971 "trtype": "tcp", 00:31:25.971 "traddr": "10.0.0.2", 00:31:25.971 "adrfam": "ipv4", 00:31:25.971 "trsvcid": "4420", 00:31:25.971 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:25.971 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:25.971 "hdgst": false, 00:31:25.971 "ddgst": false 00:31:25.971 }, 00:31:25.971 "method": "bdev_nvme_attach_controller" 00:31:25.971 },{ 00:31:25.971 "params": { 00:31:25.971 "name": "Nvme3", 00:31:25.971 "trtype": "tcp", 00:31:25.971 "traddr": "10.0.0.2", 00:31:25.971 "adrfam": "ipv4", 00:31:25.971 "trsvcid": "4420", 00:31:25.971 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:31:25.971 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:31:25.971 "hdgst": false, 00:31:25.971 "ddgst": false 00:31:25.971 }, 00:31:25.971 "method": "bdev_nvme_attach_controller" 00:31:25.971 },{ 00:31:25.971 "params": { 00:31:25.971 "name": "Nvme4", 00:31:25.971 "trtype": "tcp", 00:31:25.971 "traddr": "10.0.0.2", 00:31:25.971 "adrfam": "ipv4", 00:31:25.971 "trsvcid": "4420", 00:31:25.971 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:31:25.971 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:31:25.971 "hdgst": false, 00:31:25.971 "ddgst": false 00:31:25.971 }, 00:31:25.971 "method": "bdev_nvme_attach_controller" 00:31:25.971 },{ 00:31:25.971 "params": { 00:31:25.971 "name": "Nvme5", 00:31:25.971 "trtype": "tcp", 00:31:25.971 "traddr": "10.0.0.2", 00:31:25.971 "adrfam": "ipv4", 00:31:25.971 "trsvcid": "4420", 00:31:25.971 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:31:25.971 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:31:25.971 "hdgst": false, 00:31:25.971 "ddgst": false 00:31:25.971 }, 00:31:25.971 "method": "bdev_nvme_attach_controller" 00:31:25.971 },{ 00:31:25.971 "params": { 00:31:25.971 "name": "Nvme6", 00:31:25.971 "trtype": "tcp", 00:31:25.971 "traddr": "10.0.0.2", 00:31:25.971 "adrfam": "ipv4", 00:31:25.971 "trsvcid": "4420", 00:31:25.971 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:31:25.971 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:31:25.971 "hdgst": false, 00:31:25.971 "ddgst": false 00:31:25.971 }, 00:31:25.971 "method": "bdev_nvme_attach_controller" 00:31:25.971 },{ 00:31:25.971 "params": { 00:31:25.971 "name": "Nvme7", 00:31:25.971 "trtype": "tcp", 00:31:25.971 "traddr": "10.0.0.2", 00:31:25.971 "adrfam": "ipv4", 00:31:25.971 "trsvcid": "4420", 00:31:25.971 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:31:25.971 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:31:25.971 "hdgst": false, 00:31:25.971 "ddgst": false 00:31:25.971 }, 00:31:25.971 "method": "bdev_nvme_attach_controller" 00:31:25.971 },{ 00:31:25.971 "params": { 00:31:25.971 "name": "Nvme8", 00:31:25.971 "trtype": "tcp", 00:31:25.971 "traddr": "10.0.0.2", 00:31:25.971 "adrfam": "ipv4", 00:31:25.971 "trsvcid": "4420", 00:31:25.971 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:31:25.971 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:31:25.971 "hdgst": false, 
00:31:25.971 "ddgst": false 00:31:25.971 }, 00:31:25.971 "method": "bdev_nvme_attach_controller" 00:31:25.971 },{ 00:31:25.971 "params": { 00:31:25.971 "name": "Nvme9", 00:31:25.971 "trtype": "tcp", 00:31:25.971 "traddr": "10.0.0.2", 00:31:25.971 "adrfam": "ipv4", 00:31:25.971 "trsvcid": "4420", 00:31:25.971 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:31:25.971 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:31:25.971 "hdgst": false, 00:31:25.971 "ddgst": false 00:31:25.971 }, 00:31:25.971 "method": "bdev_nvme_attach_controller" 00:31:25.971 },{ 00:31:25.971 "params": { 00:31:25.971 "name": "Nvme10", 00:31:25.971 "trtype": "tcp", 00:31:25.971 "traddr": "10.0.0.2", 00:31:25.971 "adrfam": "ipv4", 00:31:25.971 "trsvcid": "4420", 00:31:25.971 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:31:25.971 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:31:25.971 "hdgst": false, 00:31:25.971 "ddgst": false 00:31:25.971 }, 00:31:25.971 "method": "bdev_nvme_attach_controller" 00:31:25.971 }' 00:31:25.971 [2024-07-22 10:47:31.584069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:25.971 [2024-07-22 10:47:31.615509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:27.352 Running I/O for 1 seconds... 00:31:28.732 00:31:28.732 Latency(us) 00:31:28.732 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:28.732 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:28.732 Verification LBA range: start 0x0 length 0x400 00:31:28.732 Nvme1n1 : 1.11 230.94 14.43 0.00 0.00 274240.21 16165.55 249910.61 00:31:28.732 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:28.732 Verification LBA range: start 0x0 length 0x400 00:31:28.732 Nvme2n1 : 1.12 228.82 14.30 0.00 0.00 271740.59 18350.08 249910.61 00:31:28.732 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:28.732 Verification LBA range: start 0x0 length 0x400 00:31:28.732 Nvme3n1 : 1.06 251.87 15.74 0.00 0.00 239494.76 4150.61 251658.24 00:31:28.732 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:28.732 Verification LBA range: start 0x0 length 0x400 00:31:28.732 Nvme4n1 : 1.20 265.12 16.57 0.00 0.00 226962.74 17694.72 230686.72 00:31:28.732 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:28.732 Verification LBA range: start 0x0 length 0x400 00:31:28.732 Nvme5n1 : 1.21 216.48 13.53 0.00 0.00 272312.12 8028.16 267386.88 00:31:28.732 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:28.732 Verification LBA range: start 0x0 length 0x400 00:31:28.732 Nvme6n1 : 1.11 229.66 14.35 0.00 0.00 251628.16 15837.87 253405.87 00:31:28.732 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:28.732 Verification LBA range: start 0x0 length 0x400 00:31:28.732 Nvme7n1 : 1.21 264.85 16.55 0.00 0.00 216001.45 13544.11 258648.75 00:31:28.732 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:28.732 Verification LBA range: start 0x0 length 0x400 00:31:28.732 Nvme8n1 : 1.22 262.53 16.41 0.00 0.00 214147.93 14636.37 234181.97 00:31:28.732 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:28.732 Verification LBA range: start 0x0 length 0x400 00:31:28.732 Nvme9n1 : 1.22 262.32 16.40 0.00 0.00 210406.91 16711.68 255153.49 00:31:28.732 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:28.732 Verification LBA range: start 0x0 length 0x400 
00:31:28.732 Nvme10n1 : 1.22 261.71 16.36 0.00 0.00 207128.32 8901.97 260396.37 00:31:28.732 =================================================================================================================== 00:31:28.732 Total : 2474.30 154.64 0.00 0.00 235876.69 4150.61 267386.88 00:31:28.732 10:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:31:28.732 10:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:31:28.732 10:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:31:28.732 10:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:28.732 10:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:31:28.732 10:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:28.732 10:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:31:28.732 10:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:28.732 10:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:31:28.732 10:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:28.732 10:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:28.732 rmmod nvme_tcp 00:31:28.732 rmmod nvme_fabrics 00:31:28.732 rmmod nvme_keyring 00:31:28.732 10:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:28.732 10:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:31:28.732 10:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:31:28.732 10:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 2125528 ']' 00:31:28.732 10:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 2125528 00:31:28.732 10:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 2125528 ']' 00:31:28.732 10:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 2125528 00:31:28.732 10:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:31:28.732 10:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:28.732 10:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2125528 00:31:28.991 10:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:28.991 10:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:28.991 10:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2125528' 00:31:28.991 killing process with pid 2125528 00:31:28.991 10:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 2125528 00:31:28.991 10:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 2125528 00:31:29.252 10:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:29.252 10:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:29.252 10:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:29.252 10:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:29.252 10:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:29.252 10:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:29.252 10:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:29.252 10:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:31.171 00:31:31.171 real 0m17.193s 00:31:31.171 user 0m33.292s 00:31:31.171 sys 0m7.186s 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:31.171 ************************************ 00:31:31.171 END TEST nvmf_shutdown_tc1 00:31:31.171 ************************************ 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:31.171 ************************************ 00:31:31.171 START TEST nvmf_shutdown_tc2 00:31:31.171 ************************************ 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma 
]] 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:31.171 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:31.171 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:31.171 Found net devices under 0000:31:00.0: cvl_0_0 00:31:31.171 10:47:36 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:31.171 Found net devices under 0000:31:00.1: cvl_0_1 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:31.171 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:31.172 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:31.172 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:31.172 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:31.172 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:31.172 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:31.172 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:31.172 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:31.172 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:31.432 10:47:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:31.432 10:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:31.432 10:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:31.432 10:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:31.432 10:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:31.693 10:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:31.693 10:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:31.693 10:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:31.693 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:31.693 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.445 ms 00:31:31.693 00:31:31.693 --- 10.0.0.2 ping statistics --- 00:31:31.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:31.693 rtt min/avg/max/mdev = 0.445/0.445/0.445/0.000 ms 00:31:31.693 10:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:31.693 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:31.693 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:31:31.693 00:31:31.693 --- 10.0.0.1 ping statistics --- 00:31:31.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:31.693 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:31:31.693 10:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:31.693 10:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:31:31.693 10:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:31.693 10:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:31.693 10:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:31.693 10:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:31.693 10:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:31.693 10:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:31.693 10:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:31.693 10:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:31:31.693 10:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:31.693 10:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:31.693 10:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:31.693 10:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2127663 00:31:31.693 10:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2127663 00:31:31.693 10:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2127663 ']' 00:31:31.693 10:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:31.693 10:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 
-- common/autotest_common.sh@834 -- # local max_retries=100 00:31:31.693 10:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:31.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:31.693 10:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:31.693 10:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:31.693 10:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:31:31.693 [2024-07-22 10:47:37.254780] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:31:31.693 [2024-07-22 10:47:37.254842] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:31.693 EAL: No free 2048 kB hugepages reported on node 1 00:31:31.693 [2024-07-22 10:47:37.346675] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:31.693 [2024-07-22 10:47:37.386294] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:31.693 [2024-07-22 10:47:37.386334] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:31.693 [2024-07-22 10:47:37.386340] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:31.693 [2024-07-22 10:47:37.386344] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:31.693 [2024-07-22 10:47:37.386349] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
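Before the target starts, the trace above moves the first E810 port (cvl_0_0) into a dedicated network namespace so the target and the initiator get separate network stacks on the same host, then verifies reachability in both directions. Condensed from the commands already traced (interface names and addresses are specific to this test bed):

  # Condensed recap of the namespace setup shown in the trace above.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
  # nvmf_tgt is then launched inside the namespace with core mask 0x1E (cores 1-4).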
00:31:31.693 [2024-07-22 10:47:37.386467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:31.693 [2024-07-22 10:47:37.386604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:31.693 [2024-07-22 10:47:37.386763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:31.693 [2024-07-22 10:47:37.386765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:31:32.635 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:32.635 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:31:32.635 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:32.635 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:32.635 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:32.635 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:32.635 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:32.635 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.635 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:32.635 [2024-07-22 10:47:38.071675] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:32.635 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.635 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:31:32.635 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:31:32.635 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:32.635 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:32.635 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:32.635 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:32.635 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:31:32.635 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:32.635 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:31:32.635 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:32.635 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:31:32.635 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:32.635 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:31:32.635 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:32.635 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:31:32.635 10:47:38 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:32.635 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:31:32.635 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:32.635 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:31:32.635 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:32.635 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:31:32.635 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:32.635 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:31:32.635 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:32.635 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:31:32.635 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:31:32.635 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.635 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:32.635 Malloc1 00:31:32.635 [2024-07-22 10:47:38.170386] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:32.635 Malloc2 00:31:32.635 Malloc3 00:31:32.635 Malloc4 00:31:32.635 Malloc5 00:31:32.912 Malloc6 00:31:32.912 Malloc7 00:31:32.912 Malloc8 00:31:32.912 Malloc9 00:31:32.912 Malloc10 00:31:32.912 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.912 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:31:32.912 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:32.912 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:32.912 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=2127870 00:31:32.912 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 2127870 /var/tmp/bdevperf.sock 00:31:32.912 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2127870 ']' 00:31:32.912 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:32.912 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:31:32.912 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:32.912 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:31:32.912 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:32.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
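The create_subsystems step above batches its RPCs into rpcs.txt rather than issuing them one by one; the transport was already created with nvmf_create_transport -t tcp -o -u 8192, and the log only shows the resulting Malloc1-Malloc10 bdevs and the listener on 10.0.0.2:4420. A representative sequence for a single subsystem is sketched below; the bdev size, block size, and serial number are assumptions, since those values do not appear in the log.

  # Representative of what the batched rpcs.txt encodes for one subsystem.
  # 128 MiB size, 512 B block size and the serial number are assumed values.
  scripts/rpc.py bdev_malloc_create -b Malloc1 128 512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -f ipv4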
00:31:32.912 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:32.912 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:31:32.912 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:31:32.912 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:32.912 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:32.912 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:32.912 { 00:31:32.912 "params": { 00:31:32.912 "name": "Nvme$subsystem", 00:31:32.912 "trtype": "$TEST_TRANSPORT", 00:31:32.912 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:32.912 "adrfam": "ipv4", 00:31:32.912 "trsvcid": "$NVMF_PORT", 00:31:32.912 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:32.912 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:32.912 "hdgst": ${hdgst:-false}, 00:31:32.912 "ddgst": ${ddgst:-false} 00:31:32.912 }, 00:31:32.912 "method": "bdev_nvme_attach_controller" 00:31:32.912 } 00:31:32.912 EOF 00:31:32.912 )") 00:31:32.913 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:31:32.913 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:32.913 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:32.913 { 00:31:32.913 "params": { 00:31:32.913 "name": "Nvme$subsystem", 00:31:32.913 "trtype": "$TEST_TRANSPORT", 00:31:32.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:32.913 "adrfam": "ipv4", 00:31:32.913 "trsvcid": "$NVMF_PORT", 00:31:32.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:32.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:32.913 "hdgst": ${hdgst:-false}, 00:31:32.913 "ddgst": ${ddgst:-false} 00:31:32.913 }, 00:31:32.913 "method": "bdev_nvme_attach_controller" 00:31:32.913 } 00:31:32.913 EOF 00:31:32.913 )") 00:31:32.913 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:31:32.913 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:32.913 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:32.913 { 00:31:32.913 "params": { 00:31:32.913 "name": "Nvme$subsystem", 00:31:32.913 "trtype": "$TEST_TRANSPORT", 00:31:32.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:32.913 "adrfam": "ipv4", 00:31:32.913 "trsvcid": "$NVMF_PORT", 00:31:32.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:32.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:32.913 "hdgst": ${hdgst:-false}, 00:31:32.913 "ddgst": ${ddgst:-false} 00:31:32.913 }, 00:31:32.913 "method": "bdev_nvme_attach_controller" 00:31:32.913 } 00:31:32.913 EOF 00:31:32.913 )") 00:31:32.913 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:31:32.913 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:32.913 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:32.913 { 00:31:32.913 "params": { 00:31:32.913 "name": "Nvme$subsystem", 00:31:32.913 "trtype": "$TEST_TRANSPORT", 00:31:32.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:32.913 "adrfam": "ipv4", 00:31:32.913 "trsvcid": "$NVMF_PORT", 
00:31:32.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:32.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:32.913 "hdgst": ${hdgst:-false}, 00:31:32.913 "ddgst": ${ddgst:-false} 00:31:32.913 }, 00:31:32.913 "method": "bdev_nvme_attach_controller" 00:31:32.913 } 00:31:32.913 EOF 00:31:32.913 )") 00:31:32.913 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:31:32.913 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:32.913 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:32.913 { 00:31:32.913 "params": { 00:31:32.913 "name": "Nvme$subsystem", 00:31:32.913 "trtype": "$TEST_TRANSPORT", 00:31:32.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:32.913 "adrfam": "ipv4", 00:31:32.913 "trsvcid": "$NVMF_PORT", 00:31:32.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:32.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:32.913 "hdgst": ${hdgst:-false}, 00:31:32.913 "ddgst": ${ddgst:-false} 00:31:32.913 }, 00:31:32.913 "method": "bdev_nvme_attach_controller" 00:31:32.913 } 00:31:32.913 EOF 00:31:32.913 )") 00:31:32.913 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:31:32.913 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:32.913 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:32.913 { 00:31:32.913 "params": { 00:31:32.913 "name": "Nvme$subsystem", 00:31:32.913 "trtype": "$TEST_TRANSPORT", 00:31:32.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:32.913 "adrfam": "ipv4", 00:31:32.913 "trsvcid": "$NVMF_PORT", 00:31:32.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:32.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:32.913 "hdgst": ${hdgst:-false}, 00:31:32.913 "ddgst": ${ddgst:-false} 00:31:32.913 }, 00:31:32.913 "method": "bdev_nvme_attach_controller" 00:31:32.913 } 00:31:32.913 EOF 00:31:32.913 )") 00:31:33.174 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:31:33.175 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:33.175 [2024-07-22 10:47:38.616654] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
00:31:33.175 [2024-07-22 10:47:38.616709] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2127870 ] 00:31:33.175 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:33.175 { 00:31:33.175 "params": { 00:31:33.175 "name": "Nvme$subsystem", 00:31:33.175 "trtype": "$TEST_TRANSPORT", 00:31:33.175 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:33.175 "adrfam": "ipv4", 00:31:33.175 "trsvcid": "$NVMF_PORT", 00:31:33.175 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:33.175 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:33.175 "hdgst": ${hdgst:-false}, 00:31:33.175 "ddgst": ${ddgst:-false} 00:31:33.175 }, 00:31:33.175 "method": "bdev_nvme_attach_controller" 00:31:33.175 } 00:31:33.175 EOF 00:31:33.175 )") 00:31:33.175 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:31:33.175 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:33.175 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:33.175 { 00:31:33.175 "params": { 00:31:33.175 "name": "Nvme$subsystem", 00:31:33.175 "trtype": "$TEST_TRANSPORT", 00:31:33.175 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:33.175 "adrfam": "ipv4", 00:31:33.175 "trsvcid": "$NVMF_PORT", 00:31:33.175 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:33.175 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:33.175 "hdgst": ${hdgst:-false}, 00:31:33.175 "ddgst": ${ddgst:-false} 00:31:33.175 }, 00:31:33.175 "method": "bdev_nvme_attach_controller" 00:31:33.175 } 00:31:33.175 EOF 00:31:33.175 )") 00:31:33.175 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:31:33.175 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:33.175 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:33.175 { 00:31:33.175 "params": { 00:31:33.175 "name": "Nvme$subsystem", 00:31:33.175 "trtype": "$TEST_TRANSPORT", 00:31:33.175 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:33.175 "adrfam": "ipv4", 00:31:33.175 "trsvcid": "$NVMF_PORT", 00:31:33.175 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:33.175 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:33.175 "hdgst": ${hdgst:-false}, 00:31:33.175 "ddgst": ${ddgst:-false} 00:31:33.175 }, 00:31:33.175 "method": "bdev_nvme_attach_controller" 00:31:33.175 } 00:31:33.175 EOF 00:31:33.175 )") 00:31:33.175 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:31:33.175 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:33.175 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:33.175 { 00:31:33.175 "params": { 00:31:33.175 "name": "Nvme$subsystem", 00:31:33.175 "trtype": "$TEST_TRANSPORT", 00:31:33.175 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:33.175 "adrfam": "ipv4", 00:31:33.175 "trsvcid": "$NVMF_PORT", 00:31:33.175 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:33.175 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:33.175 "hdgst": ${hdgst:-false}, 00:31:33.175 "ddgst": ${ddgst:-false} 00:31:33.175 }, 00:31:33.175 "method": 
"bdev_nvme_attach_controller" 00:31:33.175 } 00:31:33.175 EOF 00:31:33.175 )") 00:31:33.175 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:31:33.175 EAL: No free 2048 kB hugepages reported on node 1 00:31:33.175 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:31:33.175 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:31:33.175 10:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:33.175 "params": { 00:31:33.175 "name": "Nvme1", 00:31:33.175 "trtype": "tcp", 00:31:33.175 "traddr": "10.0.0.2", 00:31:33.175 "adrfam": "ipv4", 00:31:33.175 "trsvcid": "4420", 00:31:33.175 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:33.175 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:33.175 "hdgst": false, 00:31:33.175 "ddgst": false 00:31:33.175 }, 00:31:33.175 "method": "bdev_nvme_attach_controller" 00:31:33.175 },{ 00:31:33.175 "params": { 00:31:33.175 "name": "Nvme2", 00:31:33.175 "trtype": "tcp", 00:31:33.175 "traddr": "10.0.0.2", 00:31:33.175 "adrfam": "ipv4", 00:31:33.175 "trsvcid": "4420", 00:31:33.175 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:33.175 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:33.175 "hdgst": false, 00:31:33.175 "ddgst": false 00:31:33.175 }, 00:31:33.175 "method": "bdev_nvme_attach_controller" 00:31:33.175 },{ 00:31:33.175 "params": { 00:31:33.175 "name": "Nvme3", 00:31:33.175 "trtype": "tcp", 00:31:33.175 "traddr": "10.0.0.2", 00:31:33.175 "adrfam": "ipv4", 00:31:33.175 "trsvcid": "4420", 00:31:33.175 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:31:33.175 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:31:33.175 "hdgst": false, 00:31:33.175 "ddgst": false 00:31:33.175 }, 00:31:33.175 "method": "bdev_nvme_attach_controller" 00:31:33.175 },{ 00:31:33.175 "params": { 00:31:33.175 "name": "Nvme4", 00:31:33.175 "trtype": "tcp", 00:31:33.175 "traddr": "10.0.0.2", 00:31:33.175 "adrfam": "ipv4", 00:31:33.175 "trsvcid": "4420", 00:31:33.175 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:31:33.175 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:31:33.175 "hdgst": false, 00:31:33.175 "ddgst": false 00:31:33.175 }, 00:31:33.175 "method": "bdev_nvme_attach_controller" 00:31:33.175 },{ 00:31:33.175 "params": { 00:31:33.175 "name": "Nvme5", 00:31:33.175 "trtype": "tcp", 00:31:33.175 "traddr": "10.0.0.2", 00:31:33.175 "adrfam": "ipv4", 00:31:33.175 "trsvcid": "4420", 00:31:33.175 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:31:33.175 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:31:33.175 "hdgst": false, 00:31:33.175 "ddgst": false 00:31:33.175 }, 00:31:33.175 "method": "bdev_nvme_attach_controller" 00:31:33.175 },{ 00:31:33.175 "params": { 00:31:33.175 "name": "Nvme6", 00:31:33.175 "trtype": "tcp", 00:31:33.175 "traddr": "10.0.0.2", 00:31:33.175 "adrfam": "ipv4", 00:31:33.175 "trsvcid": "4420", 00:31:33.175 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:31:33.175 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:31:33.175 "hdgst": false, 00:31:33.175 "ddgst": false 00:31:33.175 }, 00:31:33.175 "method": "bdev_nvme_attach_controller" 00:31:33.175 },{ 00:31:33.175 "params": { 00:31:33.175 "name": "Nvme7", 00:31:33.175 "trtype": "tcp", 00:31:33.175 "traddr": "10.0.0.2", 00:31:33.175 "adrfam": "ipv4", 00:31:33.175 "trsvcid": "4420", 00:31:33.175 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:31:33.175 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:31:33.175 "hdgst": false, 00:31:33.175 "ddgst": false 00:31:33.175 }, 00:31:33.175 "method": "bdev_nvme_attach_controller" 00:31:33.175 
},{ 00:31:33.175 "params": { 00:31:33.175 "name": "Nvme8", 00:31:33.175 "trtype": "tcp", 00:31:33.175 "traddr": "10.0.0.2", 00:31:33.175 "adrfam": "ipv4", 00:31:33.175 "trsvcid": "4420", 00:31:33.176 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:31:33.176 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:31:33.176 "hdgst": false, 00:31:33.176 "ddgst": false 00:31:33.176 }, 00:31:33.176 "method": "bdev_nvme_attach_controller" 00:31:33.176 },{ 00:31:33.176 "params": { 00:31:33.176 "name": "Nvme9", 00:31:33.176 "trtype": "tcp", 00:31:33.176 "traddr": "10.0.0.2", 00:31:33.176 "adrfam": "ipv4", 00:31:33.176 "trsvcid": "4420", 00:31:33.176 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:31:33.176 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:31:33.176 "hdgst": false, 00:31:33.176 "ddgst": false 00:31:33.176 }, 00:31:33.176 "method": "bdev_nvme_attach_controller" 00:31:33.176 },{ 00:31:33.176 "params": { 00:31:33.176 "name": "Nvme10", 00:31:33.176 "trtype": "tcp", 00:31:33.176 "traddr": "10.0.0.2", 00:31:33.176 "adrfam": "ipv4", 00:31:33.176 "trsvcid": "4420", 00:31:33.176 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:31:33.176 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:31:33.176 "hdgst": false, 00:31:33.176 "ddgst": false 00:31:33.176 }, 00:31:33.176 "method": "bdev_nvme_attach_controller" 00:31:33.176 }' 00:31:33.176 [2024-07-22 10:47:38.682618] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:33.176 [2024-07-22 10:47:38.713794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:35.086 Running I/O for 10 seconds... 00:31:35.086 10:47:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:35.086 10:47:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:31:35.086 10:47:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:31:35.086 10:47:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.086 10:47:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:35.086 10:47:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.086 10:47:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:31:35.086 10:47:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:31:35.086 10:47:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:31:35.086 10:47:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:31:35.086 10:47:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:31:35.086 10:47:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:31:35.086 10:47:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:31:35.086 10:47:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:31:35.086 10:47:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:31:35.086 10:47:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.086 10:47:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set 
+x 00:31:35.086 10:47:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.086 10:47:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:31:35.086 10:47:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:31:35.086 10:47:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:31:35.086 10:47:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:31:35.086 10:47:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:31:35.086 10:47:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:31:35.086 10:47:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:31:35.086 10:47:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.087 10:47:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:35.346 10:47:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.346 10:47:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:31:35.346 10:47:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:31:35.346 10:47:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:31:35.605 10:47:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:31:35.605 10:47:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:31:35.605 10:47:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:31:35.605 10:47:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:31:35.605 10:47:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.605 10:47:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:35.605 10:47:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.605 10:47:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:31:35.605 10:47:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:31:35.605 10:47:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:31:35.605 10:47:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:31:35.605 10:47:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:31:35.605 10:47:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 2127870 00:31:35.605 10:47:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 2127870 ']' 00:31:35.605 10:47:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 2127870 00:31:35.605 10:47:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:31:35.605 10:47:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:35.605 10:47:41 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2127870
00:31:35.605 10:47:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:31:35.605 10:47:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:31:35.605 10:47:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2127870'
00:31:35.605 killing process with pid 2127870
00:31:35.605 10:47:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 2127870
00:31:35.605 10:47:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 2127870
00:31:35.605 Received shutdown signal, test time was about 0.961969 seconds
00:31:35.605
00:31:35.605 Latency(us)
00:31:35.605 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:35.605 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:35.605 Verification LBA range: start 0x0 length 0x400
00:31:35.605 Nvme1n1 : 0.94 204.31 12.77 0.00 0.00 309557.19 22609.92 269134.51
00:31:35.605 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:35.605 Verification LBA range: start 0x0 length 0x400
00:31:35.605 Nvme2n1 : 0.96 267.75 16.73 0.00 0.00 231333.33 21408.43 244667.73
00:31:35.605 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:35.605 Verification LBA range: start 0x0 length 0x400
00:31:35.605 Nvme3n1 : 0.95 270.88 16.93 0.00 0.00 223882.99 12014.93 251658.24
00:31:35.605 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:35.605 Verification LBA range: start 0x0 length 0x400
00:31:35.605 Nvme4n1 : 0.95 269.09 16.82 0.00 0.00 220914.99 18896.21 244667.73
00:31:35.605 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:35.605 Verification LBA range: start 0x0 length 0x400
00:31:35.605 Nvme5n1 : 0.96 267.49 16.72 0.00 0.00 216055.68 14527.15 221948.59
00:31:35.605 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:35.605 Verification LBA range: start 0x0 length 0x400
00:31:35.605 Nvme6n1 : 0.95 269.95 16.87 0.00 0.00 210631.68 34734.08 228939.09
00:31:35.605 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:35.605 Verification LBA range: start 0x0 length 0x400
00:31:35.605 Nvme7n1 : 0.96 266.37 16.65 0.00 0.00 209022.08 16056.32 256901.12
00:31:35.605 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:35.605 Verification LBA range: start 0x0 length 0x400
00:31:35.605 Nvme8n1 : 0.92 208.02 13.00 0.00 0.00 259528.82 15182.51 246415.36
00:31:35.605 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:35.605 Verification LBA range: start 0x0 length 0x400
00:31:35.605 Nvme9n1 : 0.94 204.96 12.81 0.00 0.00 258168.04 22173.01 242920.11
00:31:35.605 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:35.605 Verification LBA range: start 0x0 length 0x400
00:31:35.605 Nvme10n1 : 0.93 206.04 12.88 0.00 0.00 250228.62 23483.73 217579.52
00:31:35.605 ===================================================================================================================
00:31:35.605 Total : 2434.86 152.18 0.00 0.00 235550.31 12014.93 269134.51
00:31:35.863 10:47:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1
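[editor's note] The pass/fail decision above comes from the waitforio helper in target/shutdown.sh, whose xtrace is interleaved in the log (read_io_count climbing 3 -> 67 -> 131 before ret=0 and break). The following is only a rough bash reconstruction pieced together from that xtrace, not the verbatim shutdown.sh source; rpc_cmd is the SPDK test wrapper around rpc.py, and the 10-iteration limit, 100-op threshold and 0.25 s sleep are the values visible in the trace.

  waitforio() {
      # Poll the bdevperf RPC socket until the given bdev has served at
      # least 100 reads, giving up after 10 attempts (~2.5 s total).
      local sock=$1 bdev=$2
      local ret=1 i io_count
      for ((i = 10; i != 0; i--)); do
          io_count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" |
                     jq -r '.bdevs[0].num_read_ops')
          if [ "$io_count" -ge 100 ]; then
              ret=0
              break
          fi
          sleep 0.25
      done
      return $ret
  }

Invoked in this run as: waitforio /var/tmp/bdevperf.sock Nvme1n1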
00:31:36.816 10:47:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 2127663 00:31:36.816 10:47:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:31:36.816 10:47:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:31:36.816 10:47:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:31:36.816 10:47:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:36.816 10:47:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:31:36.816 10:47:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:36.816 10:47:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:31:36.816 10:47:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:36.816 10:47:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:31:36.816 10:47:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:36.816 10:47:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:36.816 rmmod nvme_tcp 00:31:36.816 rmmod nvme_fabrics 00:31:36.816 rmmod nvme_keyring 00:31:36.816 10:47:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:36.816 10:47:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:31:36.816 10:47:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:31:36.816 10:47:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 2127663 ']' 00:31:36.816 10:47:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 2127663 00:31:36.816 10:47:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 2127663 ']' 00:31:36.816 10:47:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 2127663 00:31:36.816 10:47:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:31:36.816 10:47:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:36.816 10:47:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2127663 00:31:37.086 10:47:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:37.086 10:47:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:37.086 10:47:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2127663' 00:31:37.086 killing process with pid 2127663 00:31:37.086 10:47:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 2127663 00:31:37.086 10:47:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 2127663 00:31:37.086 10:47:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:37.086 10:47:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:37.086 
10:47:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:37.086 10:47:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:37.086 10:47:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:37.086 10:47:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:37.086 10:47:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:37.086 10:47:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:39.625 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:39.625 00:31:39.625 real 0m7.983s 00:31:39.625 user 0m24.285s 00:31:39.625 sys 0m1.254s 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:39.626 ************************************ 00:31:39.626 END TEST nvmf_shutdown_tc2 00:31:39.626 ************************************ 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:39.626 ************************************ 00:31:39.626 START TEST nvmf_shutdown_tc3 00:31:39.626 ************************************ 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:31:39.626 
10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:39.626 10:47:44 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:39.626 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:39.626 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:39.626 Found net devices under 0000:31:00.0: cvl_0_0 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:39.626 10:47:44 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:39.626 Found net devices under 0000:31:00.1: cvl_0_1 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:39.626 10:47:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:39.626 10:47:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:39.626 10:47:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:39.626 10:47:45 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:39.626 10:47:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:39.626 10:47:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:39.626 10:47:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:39.626 10:47:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:39.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:39.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.489 ms 00:31:39.627 00:31:39.627 --- 10.0.0.2 ping statistics --- 00:31:39.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:39.627 rtt min/avg/max/mdev = 0.489/0.489/0.489/0.000 ms 00:31:39.627 10:47:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:39.627 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:39.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:31:39.627 00:31:39.627 --- 10.0.0.1 ping statistics --- 00:31:39.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:39.627 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:31:39.627 10:47:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:39.627 10:47:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:31:39.627 10:47:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:39.627 10:47:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:39.627 10:47:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:39.627 10:47:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:39.627 10:47:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:39.627 10:47:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:39.627 10:47:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:39.627 10:47:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:31:39.627 10:47:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:39.627 10:47:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:39.627 10:47:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:39.627 10:47:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2129200 00:31:39.627 10:47:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2129200 00:31:39.627 10:47:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 2129200 ']' 00:31:39.627 10:47:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:31:39.627 10:47:45 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:39.627 10:47:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:39.627 10:47:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:39.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:39.627 10:47:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:39.627 10:47:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:39.887 [2024-07-22 10:47:45.324987] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:31:39.887 [2024-07-22 10:47:45.325046] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:39.887 EAL: No free 2048 kB hugepages reported on node 1 00:31:39.887 [2024-07-22 10:47:45.419274] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:39.887 [2024-07-22 10:47:45.454531] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:39.887 [2024-07-22 10:47:45.454566] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:39.887 [2024-07-22 10:47:45.454571] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:39.887 [2024-07-22 10:47:45.454576] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:39.887 [2024-07-22 10:47:45.454580] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
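[editor's note] For tc3 the target runs inside a network namespace so that the two ports of the same physical NIC can reach each other over TCP. The ip and iptables commands are all present in the nvmf/common.sh xtrace above; collected here as a sketch for readability (the interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are the values from this particular run):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # root namespace -> namespaced target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespaced target -> root namespace

nvmf_tgt itself is then launched with "ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E", which is why the EAL and reactor notices above originate from inside the namespace.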
00:31:39.887 [2024-07-22 10:47:45.454697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:39.887 [2024-07-22 10:47:45.454857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:39.887 [2024-07-22 10:47:45.455019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:39.887 [2024-07-22 10:47:45.455021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:31:40.457 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:40.457 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:31:40.457 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:40.457 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:40.457 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:40.457 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:40.457 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:40.457 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.457 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:40.457 [2024-07-22 10:47:46.148650] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:40.457 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.457 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:31:40.457 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:31:40.457 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:40.457 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:40.717 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:40.717 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:40.717 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:31:40.717 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:40.717 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:31:40.717 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:40.717 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:31:40.717 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:40.717 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:31:40.717 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:40.717 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:31:40.717 10:47:46 
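[editor's note] Before bdevperf starts, the harness creates the TCP transport, ten Malloc bdevs and ten subsystems (cnode1..cnode10) listening on 10.0.0.2:4420. The batched RPC file (rpcs.txt) is not echoed into the log, so the following is only a hedged sketch of an equivalent per-subsystem RPC sequence for cnode1; the Malloc geometry (64 MiB of 512 B blocks) and the serial number are illustrative placeholders, while the transport options and listener address match the log:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create -b Malloc1 64 512
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -f ipv4

bdevperf then attaches to all ten subsystems via the JSON produced by gen_nvmf_target_json (the bdev_nvme_attach_controller blocks printed below) and drives them with -q 64 -o 65536 -w verify -t 10.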
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:40.717 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:31:40.717 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:40.718 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:31:40.718 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:40.718 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:31:40.718 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:40.718 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:31:40.718 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:40.718 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:31:40.718 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:31:40.718 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.718 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:40.718 Malloc1 00:31:40.718 [2024-07-22 10:47:46.247400] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:40.718 Malloc2 00:31:40.718 Malloc3 00:31:40.718 Malloc4 00:31:40.718 Malloc5 00:31:40.718 Malloc6 00:31:40.978 Malloc7 00:31:40.978 Malloc8 00:31:40.978 Malloc9 00:31:40.978 Malloc10 00:31:40.978 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.978 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:31:40.978 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:40.978 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:40.978 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=2129575 00:31:40.978 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 2129575 /var/tmp/bdevperf.sock 00:31:40.978 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 2129575 ']' 00:31:40.978 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:40.978 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:40.978 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:40.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:31:40.978 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:31:40.978 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:40.978 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:31:40.978 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:40.978 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:31:40.978 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:31:40.978 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:40.978 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:40.978 { 00:31:40.978 "params": { 00:31:40.978 "name": "Nvme$subsystem", 00:31:40.978 "trtype": "$TEST_TRANSPORT", 00:31:40.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:40.978 "adrfam": "ipv4", 00:31:40.978 "trsvcid": "$NVMF_PORT", 00:31:40.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:40.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:40.978 "hdgst": ${hdgst:-false}, 00:31:40.978 "ddgst": ${ddgst:-false} 00:31:40.978 }, 00:31:40.978 "method": "bdev_nvme_attach_controller" 00:31:40.978 } 00:31:40.978 EOF 00:31:40.978 )") 00:31:40.978 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:31:40.978 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:40.978 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:40.978 { 00:31:40.978 "params": { 00:31:40.978 "name": "Nvme$subsystem", 00:31:40.978 "trtype": "$TEST_TRANSPORT", 00:31:40.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:40.978 "adrfam": "ipv4", 00:31:40.978 "trsvcid": "$NVMF_PORT", 00:31:40.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:40.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:40.978 "hdgst": ${hdgst:-false}, 00:31:40.978 "ddgst": ${ddgst:-false} 00:31:40.978 }, 00:31:40.978 "method": "bdev_nvme_attach_controller" 00:31:40.978 } 00:31:40.978 EOF 00:31:40.978 )") 00:31:40.978 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:31:40.978 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:40.978 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:40.978 { 00:31:40.978 "params": { 00:31:40.978 "name": "Nvme$subsystem", 00:31:40.978 "trtype": "$TEST_TRANSPORT", 00:31:40.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:40.978 "adrfam": "ipv4", 00:31:40.978 "trsvcid": "$NVMF_PORT", 00:31:40.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:40.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:40.978 "hdgst": ${hdgst:-false}, 00:31:40.978 "ddgst": ${ddgst:-false} 00:31:40.978 }, 00:31:40.978 "method": "bdev_nvme_attach_controller" 00:31:40.978 } 00:31:40.978 EOF 00:31:40.978 )") 00:31:40.978 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:31:40.978 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:31:40.978 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:40.978 { 00:31:40.978 "params": { 00:31:40.978 "name": "Nvme$subsystem", 00:31:40.978 "trtype": "$TEST_TRANSPORT", 00:31:40.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:40.978 "adrfam": "ipv4", 00:31:40.978 "trsvcid": "$NVMF_PORT", 00:31:40.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:40.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:40.978 "hdgst": ${hdgst:-false}, 00:31:40.978 "ddgst": ${ddgst:-false} 00:31:40.978 }, 00:31:40.978 "method": "bdev_nvme_attach_controller" 00:31:40.978 } 00:31:40.978 EOF 00:31:40.978 )") 00:31:40.978 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:31:40.978 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:40.978 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:40.978 { 00:31:40.978 "params": { 00:31:40.978 "name": "Nvme$subsystem", 00:31:40.978 "trtype": "$TEST_TRANSPORT", 00:31:40.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:40.978 "adrfam": "ipv4", 00:31:40.978 "trsvcid": "$NVMF_PORT", 00:31:40.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:40.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:40.978 "hdgst": ${hdgst:-false}, 00:31:40.978 "ddgst": ${ddgst:-false} 00:31:40.978 }, 00:31:40.978 "method": "bdev_nvme_attach_controller" 00:31:40.978 } 00:31:40.978 EOF 00:31:40.978 )") 00:31:40.978 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:31:41.239 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:41.239 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:41.239 { 00:31:41.239 "params": { 00:31:41.239 "name": "Nvme$subsystem", 00:31:41.239 "trtype": "$TEST_TRANSPORT", 00:31:41.239 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:41.239 "adrfam": "ipv4", 00:31:41.239 "trsvcid": "$NVMF_PORT", 00:31:41.239 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:41.239 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:41.239 "hdgst": ${hdgst:-false}, 00:31:41.239 "ddgst": ${ddgst:-false} 00:31:41.239 }, 00:31:41.239 "method": "bdev_nvme_attach_controller" 00:31:41.239 } 00:31:41.239 EOF 00:31:41.239 )") 00:31:41.239 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:31:41.239 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:41.239 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:41.239 { 00:31:41.239 "params": { 00:31:41.239 "name": "Nvme$subsystem", 00:31:41.239 "trtype": "$TEST_TRANSPORT", 00:31:41.239 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:41.239 "adrfam": "ipv4", 00:31:41.239 "trsvcid": "$NVMF_PORT", 00:31:41.239 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:41.239 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:41.239 "hdgst": ${hdgst:-false}, 00:31:41.239 "ddgst": ${ddgst:-false} 00:31:41.239 }, 00:31:41.239 "method": "bdev_nvme_attach_controller" 00:31:41.239 } 00:31:41.239 EOF 00:31:41.239 )") 00:31:41.239 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:31:41.239 [2024-07-22 10:47:46.693668] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
00:31:41.239 [2024-07-22 10:47:46.693720] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2129575 ] 00:31:41.239 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:41.239 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:41.239 { 00:31:41.240 "params": { 00:31:41.240 "name": "Nvme$subsystem", 00:31:41.240 "trtype": "$TEST_TRANSPORT", 00:31:41.240 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:41.240 "adrfam": "ipv4", 00:31:41.240 "trsvcid": "$NVMF_PORT", 00:31:41.240 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:41.240 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:41.240 "hdgst": ${hdgst:-false}, 00:31:41.240 "ddgst": ${ddgst:-false} 00:31:41.240 }, 00:31:41.240 "method": "bdev_nvme_attach_controller" 00:31:41.240 } 00:31:41.240 EOF 00:31:41.240 )") 00:31:41.240 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:31:41.240 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:41.240 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:41.240 { 00:31:41.240 "params": { 00:31:41.240 "name": "Nvme$subsystem", 00:31:41.240 "trtype": "$TEST_TRANSPORT", 00:31:41.240 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:41.240 "adrfam": "ipv4", 00:31:41.240 "trsvcid": "$NVMF_PORT", 00:31:41.240 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:41.240 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:41.240 "hdgst": ${hdgst:-false}, 00:31:41.240 "ddgst": ${ddgst:-false} 00:31:41.240 }, 00:31:41.240 "method": "bdev_nvme_attach_controller" 00:31:41.240 } 00:31:41.240 EOF 00:31:41.240 )") 00:31:41.240 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:31:41.240 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:41.240 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:41.240 { 00:31:41.240 "params": { 00:31:41.240 "name": "Nvme$subsystem", 00:31:41.240 "trtype": "$TEST_TRANSPORT", 00:31:41.240 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:41.240 "adrfam": "ipv4", 00:31:41.240 "trsvcid": "$NVMF_PORT", 00:31:41.240 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:41.240 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:41.240 "hdgst": ${hdgst:-false}, 00:31:41.240 "ddgst": ${ddgst:-false} 00:31:41.240 }, 00:31:41.240 "method": "bdev_nvme_attach_controller" 00:31:41.240 } 00:31:41.240 EOF 00:31:41.240 )") 00:31:41.240 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:31:41.240 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:31:41.240 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:31:41.240 10:47:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:41.240 "params": { 00:31:41.240 "name": "Nvme1", 00:31:41.240 "trtype": "tcp", 00:31:41.240 "traddr": "10.0.0.2", 00:31:41.240 "adrfam": "ipv4", 00:31:41.240 "trsvcid": "4420", 00:31:41.240 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:41.240 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:41.240 "hdgst": false, 00:31:41.240 "ddgst": false 00:31:41.240 }, 00:31:41.240 "method": "bdev_nvme_attach_controller" 00:31:41.240 },{ 00:31:41.240 "params": { 00:31:41.240 "name": "Nvme2", 00:31:41.240 "trtype": "tcp", 00:31:41.240 "traddr": "10.0.0.2", 00:31:41.240 "adrfam": "ipv4", 00:31:41.240 "trsvcid": "4420", 00:31:41.240 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:41.240 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:41.240 "hdgst": false, 00:31:41.240 "ddgst": false 00:31:41.240 }, 00:31:41.240 "method": "bdev_nvme_attach_controller" 00:31:41.240 },{ 00:31:41.240 "params": { 00:31:41.240 "name": "Nvme3", 00:31:41.240 "trtype": "tcp", 00:31:41.240 "traddr": "10.0.0.2", 00:31:41.240 "adrfam": "ipv4", 00:31:41.240 "trsvcid": "4420", 00:31:41.240 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:31:41.240 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:31:41.240 "hdgst": false, 00:31:41.240 "ddgst": false 00:31:41.240 }, 00:31:41.240 "method": "bdev_nvme_attach_controller" 00:31:41.240 },{ 00:31:41.240 "params": { 00:31:41.240 "name": "Nvme4", 00:31:41.240 "trtype": "tcp", 00:31:41.240 "traddr": "10.0.0.2", 00:31:41.240 "adrfam": "ipv4", 00:31:41.240 "trsvcid": "4420", 00:31:41.240 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:31:41.240 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:31:41.240 "hdgst": false, 00:31:41.240 "ddgst": false 00:31:41.240 }, 00:31:41.240 "method": "bdev_nvme_attach_controller" 00:31:41.240 },{ 00:31:41.240 "params": { 00:31:41.240 "name": "Nvme5", 00:31:41.240 "trtype": "tcp", 00:31:41.240 "traddr": "10.0.0.2", 00:31:41.240 "adrfam": "ipv4", 00:31:41.240 "trsvcid": "4420", 00:31:41.240 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:31:41.240 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:31:41.240 "hdgst": false, 00:31:41.240 "ddgst": false 00:31:41.240 }, 00:31:41.240 "method": "bdev_nvme_attach_controller" 00:31:41.240 },{ 00:31:41.240 "params": { 00:31:41.240 "name": "Nvme6", 00:31:41.240 "trtype": "tcp", 00:31:41.240 "traddr": "10.0.0.2", 00:31:41.240 "adrfam": "ipv4", 00:31:41.240 "trsvcid": "4420", 00:31:41.240 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:31:41.240 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:31:41.240 "hdgst": false, 00:31:41.240 "ddgst": false 00:31:41.240 }, 00:31:41.240 "method": "bdev_nvme_attach_controller" 00:31:41.240 },{ 00:31:41.240 "params": { 00:31:41.240 "name": "Nvme7", 00:31:41.240 "trtype": "tcp", 00:31:41.240 "traddr": "10.0.0.2", 00:31:41.240 "adrfam": "ipv4", 00:31:41.240 "trsvcid": "4420", 00:31:41.240 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:31:41.240 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:31:41.240 "hdgst": false, 00:31:41.240 "ddgst": false 00:31:41.240 }, 00:31:41.240 "method": "bdev_nvme_attach_controller" 00:31:41.240 },{ 00:31:41.240 "params": { 00:31:41.240 "name": "Nvme8", 00:31:41.240 "trtype": "tcp", 00:31:41.240 "traddr": "10.0.0.2", 00:31:41.240 "adrfam": "ipv4", 00:31:41.240 "trsvcid": "4420", 00:31:41.240 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:31:41.240 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:31:41.240 "hdgst": false, 
00:31:41.240 "ddgst": false 00:31:41.240 }, 00:31:41.240 "method": "bdev_nvme_attach_controller" 00:31:41.240 },{ 00:31:41.240 "params": { 00:31:41.240 "name": "Nvme9", 00:31:41.240 "trtype": "tcp", 00:31:41.240 "traddr": "10.0.0.2", 00:31:41.240 "adrfam": "ipv4", 00:31:41.240 "trsvcid": "4420", 00:31:41.240 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:31:41.240 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:31:41.240 "hdgst": false, 00:31:41.240 "ddgst": false 00:31:41.240 }, 00:31:41.240 "method": "bdev_nvme_attach_controller" 00:31:41.240 },{ 00:31:41.240 "params": { 00:31:41.240 "name": "Nvme10", 00:31:41.240 "trtype": "tcp", 00:31:41.240 "traddr": "10.0.0.2", 00:31:41.240 "adrfam": "ipv4", 00:31:41.240 "trsvcid": "4420", 00:31:41.240 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:31:41.240 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:31:41.240 "hdgst": false, 00:31:41.240 "ddgst": false 00:31:41.240 }, 00:31:41.240 "method": "bdev_nvme_attach_controller" 00:31:41.240 }' 00:31:41.240 EAL: No free 2048 kB hugepages reported on node 1 00:31:41.240 [2024-07-22 10:47:46.759845] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:41.240 [2024-07-22 10:47:46.791253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:42.625 Running I/O for 10 seconds... 00:31:42.625 10:47:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:42.625 10:47:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:31:42.625 10:47:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:31:42.625 10:47:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.625 10:47:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:42.886 10:47:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.886 10:47:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:42.886 10:47:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:31:42.886 10:47:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:31:42.886 10:47:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:31:42.886 10:47:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:31:42.886 10:47:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:31:42.886 10:47:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:31:42.886 10:47:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:31:42.886 10:47:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:31:42.886 10:47:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:31:42.886 10:47:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.886 10:47:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:42.886 10:47:48 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.886 10:47:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:31:42.886 10:47:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:31:42.886 10:47:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:31:43.147 10:47:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:31:43.147 10:47:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:31:43.147 10:47:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:31:43.147 10:47:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:31:43.147 10:47:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.147 10:47:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:43.147 10:47:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.147 10:47:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:31:43.147 10:47:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:31:43.147 10:47:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:31:43.411 10:47:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:31:43.411 10:47:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:31:43.411 10:47:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:31:43.411 10:47:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:31:43.411 10:47:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.411 10:47:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:43.411 10:47:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.411 10:47:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=136 00:31:43.411 10:47:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 136 -ge 100 ']' 00:31:43.411 10:47:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:31:43.411 10:47:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:31:43.411 10:47:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:31:43.412 10:47:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 2129200 00:31:43.412 10:47:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 2129200 ']' 00:31:43.412 10:47:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 2129200 00:31:43.412 10:47:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:31:43.412 10:47:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:43.412 10:47:49 
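The polling traced above -- read_io_count climbing from 3 to 67 to 136 before the '-ge 100' test finally passes -- is the shutdown test's waitforio helper from target/shutdown.sh. A minimal re-sketch of that loop, assuming the same rpc_cmd wrapper and jq filter that appear in the trace:

  # Poll bdev_get_iostat on the bdevperf RPC socket until the named bdev has
  # completed at least 100 reads; give up after 10 attempts, 0.25 s apart.
  waitforio() {
      local sock=$1 bdev=$2
      local ret=1 i
      for ((i = 10; i != 0; i--)); do
          read_io_count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" \
              | jq -r '.bdevs[0].num_read_ops')
          if [ "$read_io_count" -ge 100 ]; then
              ret=0
              break
          fi
          sleep 0.25
      done
      return $ret
  }

In the run above it is called as waitforio /var/tmp/bdevperf.sock Nvme1n1, so the test only proceeds to kill the target once bdevperf is demonstrably moving I/O.
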
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2129200 00:31:43.412 10:47:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:43.412 10:47:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:43.412 10:47:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2129200' 00:31:43.412 killing process with pid 2129200 00:31:43.412 10:47:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 2129200 00:31:43.412 10:47:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 2129200 00:31:43.412 [2024-07-22 10:47:49.078850] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.078891] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.078897] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.078902] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.078906] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.078912] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.078916] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.078921] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.078925] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.078930] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.078934] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.078939] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.078943] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.078948] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.078952] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.078957] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.078962] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) 
to be set 00:31:43.412 [2024-07-22 10:47:49.078966] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.078971] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.078980] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.078985] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.078990] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.078995] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.078999] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.079003] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.079008] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.079013] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.079017] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.079022] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.079026] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.079031] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.079035] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.079039] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.079044] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.079048] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.079053] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.079057] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.079062] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.079067] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.079071] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.079076] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.079080] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.079085] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.079090] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.079094] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.079098] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.079104] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.079109] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.079113] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.079117] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.079122] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.079126] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.079131] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.079136] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.079140] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.079144] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.079148] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.079153] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.079157] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.079161] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.079165] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.079170] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.079174] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81550 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.083096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.412 [2024-07-22 10:47:49.083133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.412 [2024-07-22 10:47:49.083152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.412 [2024-07-22 10:47:49.083160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.412 [2024-07-22 10:47:49.083170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.412 [2024-07-22 10:47:49.083178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.412 [2024-07-22 10:47:49.083171] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.083188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.412 [2024-07-22 10:47:49.083193] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.083196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-22 10:47:49.083200] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.412 the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.083209] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.083212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:1[2024-07-22 10:47:49.083214] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.412 the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.083221] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.083221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.412 [2024-07-22 10:47:49.083226] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.412 [2024-07-22 10:47:49.083231] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.413 
[2024-07-22 10:47:49.083231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.413 [2024-07-22 10:47:49.083236] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.413 [2024-07-22 10:47:49.083239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.413 [2024-07-22 10:47:49.083241] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.413 [2024-07-22 10:47:49.083246] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.413 [2024-07-22 10:47:49.083249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.413 [2024-07-22 10:47:49.083251] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.413 [2024-07-22 10:47:49.083257] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.413 [2024-07-22 10:47:49.083257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.413 [2024-07-22 10:47:49.083262] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.413 [2024-07-22 10:47:49.083267] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.413 [2024-07-22 10:47:49.083267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.413 [2024-07-22 10:47:49.083272] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.413 [2024-07-22 10:47:49.083276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.413 [2024-07-22 10:47:49.083277] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.413 [2024-07-22 10:47:49.083283] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.413 [2024-07-22 10:47:49.083286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.413 [2024-07-22 10:47:49.083288] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.413 [2024-07-22 10:47:49.083294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.413 [2024-07-22 10:47:49.083297] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.413 [2024-07-22 10:47:49.083302] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.413 
[2024-07-22 10:47:49.083304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.413 [2024-07-22 10:47:49.083307] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.413 [2024-07-22 10:47:49.083312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-22 10:47:49.083313] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.413 the state(5) to be set 00:31:43.413 [2024-07-22 10:47:49.083320] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.413 [2024-07-22 10:47:49.083323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.413 [2024-07-22 10:47:49.083325] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.413 [2024-07-22 10:47:49.083331] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.413 [2024-07-22 10:47:49.083332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.413 [2024-07-22 10:47:49.083336] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.413 [2024-07-22 10:47:49.083341] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.413 [2024-07-22 10:47:49.083341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.413 [2024-07-22 10:47:49.083346] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.413 [2024-07-22 10:47:49.083350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.413 [2024-07-22 10:47:49.083351] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.413 [2024-07-22 10:47:49.083357] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.413 [2024-07-22 10:47:49.083359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.413 [2024-07-22 10:47:49.083362] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.413 [2024-07-22 10:47:49.083367] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.413 [2024-07-22 10:47:49.083368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.413 [2024-07-22 10:47:49.083372] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.413 
[2024-07-22 10:47:49.083377] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.413 [2024-07-22 10:47:49.083378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.413 [2024-07-22 10:47:49.083382] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.413 [2024-07-22 10:47:49.083388] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.413 [2024-07-22 10:47:49.083388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.413 [2024-07-22 10:47:49.083393] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.413 [2024-07-22 10:47:49.083403] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.413 [2024-07-22 10:47:49.083408] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.413 [2024-07-22 10:47:49.083410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.413 [2024-07-22 10:47:49.083412] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.413 [2024-07-22 10:47:49.083418] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.413 [2024-07-22 10:47:49.083418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.413 [2024-07-22 10:47:49.083424] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.413 [2024-07-22 10:47:49.083429] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.413 [2024-07-22 10:47:49.083429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.413 [2024-07-22 10:47:49.083434] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.413 [2024-07-22 10:47:49.083437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.413 [2024-07-22 10:47:49.083439] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.413 [2024-07-22 10:47:49.083445] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.413 [2024-07-22 10:47:49.083447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.413 [2024-07-22 10:47:49.083449] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.413 [2024-07-22 
10:47:49.083455] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.413 [2024-07-22 10:47:49.083455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.413 [2024-07-22 10:47:49.083460] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.413 [2024-07-22 10:47:49.083465] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.413 [2024-07-22 10:47:49.083466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.413 [2024-07-22 10:47:49.083470] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.413 [2024-07-22 10:47:49.083474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-22 10:47:49.083475] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.413 the state(5) to be set 00:31:43.413 [2024-07-22 10:47:49.083484] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.413 [2024-07-22 10:47:49.083486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.413 [2024-07-22 10:47:49.083489] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.413 [2024-07-22 10:47:49.083494] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.413 [2024-07-22 10:47:49.083494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.413 [2024-07-22 10:47:49.083499] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.413 [2024-07-22 10:47:49.083504] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.413 [2024-07-22 10:47:49.083505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.413 [2024-07-22 10:47:49.083508] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.413 [2024-07-22 10:47:49.083513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-22 10:47:49.083513] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.413 the state(5) to be set 00:31:43.413 [2024-07-22 10:47:49.083521] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.413 [2024-07-22 10:47:49.083524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.413 [2024-07-22 
10:47:49.083526] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.413 [2024-07-22 10:47:49.083532] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82380 is same with the state(5) to be set 00:31:43.413 [2024-07-22 10:47:49.083532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.413 [2024-07-22 10:47:49.083543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.413 [2024-07-22 10:47:49.083550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.414 [2024-07-22 10:47:49.083559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.414 [2024-07-22 10:47:49.083566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.414 [2024-07-22 10:47:49.083575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.414 [2024-07-22 10:47:49.083582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.414 [2024-07-22 10:47:49.083592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.414 [2024-07-22 10:47:49.083599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.414 [2024-07-22 10:47:49.083608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.414 [2024-07-22 10:47:49.083617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.414 [2024-07-22 10:47:49.083626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.414 [2024-07-22 10:47:49.083633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.414 [2024-07-22 10:47:49.083642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.414 [2024-07-22 10:47:49.083649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.414 [2024-07-22 10:47:49.083658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.414 [2024-07-22 10:47:49.083665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.414 [2024-07-22 10:47:49.083674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.414 [2024-07-22 10:47:49.083681] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.414 [2024-07-22 10:47:49.083690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.414 [2024-07-22 10:47:49.083697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.414 [2024-07-22 10:47:49.083706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.414 [2024-07-22 10:47:49.083714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.414 [2024-07-22 10:47:49.083723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.414 [2024-07-22 10:47:49.083730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.414 [2024-07-22 10:47:49.083739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.414 [2024-07-22 10:47:49.083747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.414 [2024-07-22 10:47:49.083756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.414 [2024-07-22 10:47:49.083763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.414 [2024-07-22 10:47:49.083773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.414 [2024-07-22 10:47:49.083780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.414 [2024-07-22 10:47:49.083789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.414 [2024-07-22 10:47:49.083797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.414 [2024-07-22 10:47:49.083806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.414 [2024-07-22 10:47:49.083813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.414 [2024-07-22 10:47:49.083823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.414 [2024-07-22 10:47:49.083831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.414 [2024-07-22 10:47:49.083840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.414 [2024-07-22 10:47:49.083847] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.414 [2024-07-22 10:47:49.083857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.414 [2024-07-22 10:47:49.083864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.414 [2024-07-22 10:47:49.083874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.414 [2024-07-22 10:47:49.083881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.414 [2024-07-22 10:47:49.083890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.414 [2024-07-22 10:47:49.083897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.414 [2024-07-22 10:47:49.083907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.414 [2024-07-22 10:47:49.083914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.414 [2024-07-22 10:47:49.083923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.414 [2024-07-22 10:47:49.083930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.414 [2024-07-22 10:47:49.083939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.414 [2024-07-22 10:47:49.083946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.414 [2024-07-22 10:47:49.083955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.414 [2024-07-22 10:47:49.083962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.414 [2024-07-22 10:47:49.083971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.414 [2024-07-22 10:47:49.083978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.414 [2024-07-22 10:47:49.083988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.414 [2024-07-22 10:47:49.083995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.414 [2024-07-22 10:47:49.084004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.414 [2024-07-22 10:47:49.084011] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.414 [2024-07-22 10:47:49.084020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.414 [2024-07-22 10:47:49.084028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.414 [2024-07-22 10:47:49.084037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.414 [2024-07-22 10:47:49.084045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.414 [2024-07-22 10:47:49.084054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.414 [2024-07-22 10:47:49.084061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.414 [2024-07-22 10:47:49.084070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.414 [2024-07-22 10:47:49.084077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.414 [2024-07-22 10:47:49.084086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.414 [2024-07-22 10:47:49.084093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.414 [2024-07-22 10:47:49.084102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.414 [2024-07-22 10:47:49.084109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.414 [2024-07-22 10:47:49.084118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.414 [2024-07-22 10:47:49.084125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.414 [2024-07-22 10:47:49.084134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.414 [2024-07-22 10:47:49.084141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.414 [2024-07-22 10:47:49.084150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.414 [2024-07-22 10:47:49.084157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.414 [2024-07-22 10:47:49.084166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.414 [2024-07-22 10:47:49.084172] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.414 [2024-07-22 10:47:49.084181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.414 [2024-07-22 10:47:49.084189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-22 10:47:49.084184] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.414 the state(5) to be set 00:31:43.414 [2024-07-22 10:47:49.084200] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.414 [2024-07-22 10:47:49.084201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.414 [2024-07-22 10:47:49.084205] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.414 [2024-07-22 10:47:49.084209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-22 10:47:49.084211] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.414 the state(5) to be set 00:31:43.414 [2024-07-22 10:47:49.084218] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:12[2024-07-22 10:47:49.084223] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.415 the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084229] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.415 [2024-07-22 10:47:49.084235] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084240] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with [2024-07-22 10:47:49.084240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:12the state(5) to be set 00:31:43.415 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.415 [2024-07-22 10:47:49.084246] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.415 [2024-07-22 10:47:49.084251] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084257] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084261] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084265] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084269] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084273] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084278] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:43.415 [2024-07-22 10:47:49.084283] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084289] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084293] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084297] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084301] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084306] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084310] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084318] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084322] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084327] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084332] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084336] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084341] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084345] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084349] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084354] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 
10:47:49.084358] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084362] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084366] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084371] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084375] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084379] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084384] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084388] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084393] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084401] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084406] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084410] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084415] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084420] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084425] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084429] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084433] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084438] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084442] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084448] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084452] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084457] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same 
with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084461] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084465] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084469] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084474] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084478] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084482] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084487] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084491] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82850 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084573] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1cda390 was disconnected and freed. reset controller. 00:31:43.415 [2024-07-22 10:47:49.084654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:43.415 [2024-07-22 10:47:49.084667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.415 [2024-07-22 10:47:49.084676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:43.415 [2024-07-22 10:47:49.084683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.415 [2024-07-22 10:47:49.084691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:43.415 [2024-07-22 10:47:49.084698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.415 [2024-07-22 10:47:49.084706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:43.415 [2024-07-22 10:47:49.084713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.415 [2024-07-22 10:47:49.084721] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d05ca0 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:43.415 [2024-07-22 10:47:49.084753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.415 [2024-07-22 10:47:49.084761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:31:43.415 [2024-07-22 10:47:49.084768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.415 [2024-07-22 10:47:49.084777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:43.415 [2024-07-22 10:47:49.084787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.415 [2024-07-22 10:47:49.084794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:43.415 [2024-07-22 10:47:49.084802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.415 [2024-07-22 10:47:49.084809] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e90690 is same with the state(5) to be set 00:31:43.415 [2024-07-22 10:47:49.084840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:43.415 [2024-07-22 10:47:49.084848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.415 [2024-07-22 10:47:49.084856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:43.415 [2024-07-22 10:47:49.084863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.416 [2024-07-22 10:47:49.084872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:43.416 [2024-07-22 10:47:49.084879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.416 [2024-07-22 10:47:49.084887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:43.416 [2024-07-22 10:47:49.084894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.416 [2024-07-22 10:47:49.084901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eeb9b0 is same with the state(5) to be set 00:31:43.416 [2024-07-22 10:47:49.084926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:43.416 [2024-07-22 10:47:49.084934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.416 [2024-07-22 10:47:49.084942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:43.416 [2024-07-22 10:47:49.084949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.416 [2024-07-22 10:47:49.084957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:43.416 [2024-07-22 10:47:49.084965] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.416 [2024-07-22 10:47:49.084972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:43.416 [2024-07-22 10:47:49.084979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.416 [2024-07-22 10:47:49.084986] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cffb70 is same with the state(5) to be set 00:31:43.416 [2024-07-22 10:47:49.085008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:43.416 [2024-07-22 10:47:49.085016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.416 [2024-07-22 10:47:49.085024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:43.416 [2024-07-22 10:47:49.085033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.416 [2024-07-22 10:47:49.085041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:43.416 [2024-07-22 10:47:49.085048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.416 [2024-07-22 10:47:49.085056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:43.416 [2024-07-22 10:47:49.085063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.416 [2024-07-22 10:47:49.085070] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c6f50 is same with the state(5) to be set 00:31:43.416 [2024-07-22 10:47:49.085098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:43.416 [2024-07-22 10:47:49.085106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.416 [2024-07-22 10:47:49.085114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:43.416 [2024-07-22 10:47:49.085121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.416 [2024-07-22 10:47:49.085129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:43.416 [2024-07-22 10:47:49.085136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.416 [2024-07-22 10:47:49.085144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:43.416 [2024-07-22 10:47:49.085153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:31:43.416 [2024-07-22 10:47:49.085160] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d07480 is same with the state(5) to be set 00:31:43.416 [2024-07-22 10:47:49.085307] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.416 [2024-07-22 10:47:49.085327] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.416 [2024-07-22 10:47:49.085333] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.416 [2024-07-22 10:47:49.085337] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.416 [2024-07-22 10:47:49.085342] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.416 [2024-07-22 10:47:49.085347] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.416 [2024-07-22 10:47:49.085352] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.416 [2024-07-22 10:47:49.085357] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.416 [2024-07-22 10:47:49.085361] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.416 [2024-07-22 10:47:49.085366] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.416 [2024-07-22 10:47:49.085370] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.416 [2024-07-22 10:47:49.085374] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.416 [2024-07-22 10:47:49.085383] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.416 [2024-07-22 10:47:49.085388] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.416 [2024-07-22 10:47:49.085392] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.416 [2024-07-22 10:47:49.085403] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.416 [2024-07-22 10:47:49.085408] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.416 [2024-07-22 10:47:49.085413] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.416 [2024-07-22 10:47:49.085417] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.416 [2024-07-22 10:47:49.085422] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.416 [2024-07-22 10:47:49.085426] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.416 [2024-07-22 10:47:49.085431] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.416 [2024-07-22 10:47:49.085435] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.416 [2024-07-22 10:47:49.085440] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.416 [2024-07-22 10:47:49.085445] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.416 [2024-07-22 10:47:49.085449] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.416 [2024-07-22 10:47:49.085453] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.416 [2024-07-22 10:47:49.085458] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.416 [2024-07-22 10:47:49.085463] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.416 [2024-07-22 10:47:49.085467] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.416 [2024-07-22 10:47:49.085472] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.416 [2024-07-22 10:47:49.085476] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.416 [2024-07-22 10:47:49.085480] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.416 [2024-07-22 10:47:49.085485] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.416 [2024-07-22 10:47:49.085489] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.416 [2024-07-22 10:47:49.085493] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.416 [2024-07-22 10:47:49.085498] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.416 [2024-07-22 10:47:49.085502] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.416 [2024-07-22 10:47:49.085507] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.416 [2024-07-22 10:47:49.085512] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.416 [2024-07-22 10:47:49.085517] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.416 [2024-07-22 10:47:49.085522] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the 
state(5) to be set 00:31:43.416 [2024-07-22 10:47:49.085526] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.416 [2024-07-22 10:47:49.085531] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.416 [2024-07-22 10:47:49.085536] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.416 [2024-07-22 10:47:49.085540] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.417 [2024-07-22 10:47:49.085544] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.417 [2024-07-22 10:47:49.085549] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.417 [2024-07-22 10:47:49.085553] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.417 [2024-07-22 10:47:49.085558] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.417 [2024-07-22 10:47:49.085563] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.417 [2024-07-22 10:47:49.085567] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.417 [2024-07-22 10:47:49.085571] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.417 [2024-07-22 10:47:49.085576] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.417 [2024-07-22 10:47:49.085580] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.417 [2024-07-22 10:47:49.085585] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.417 [2024-07-22 10:47:49.085589] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.417 [2024-07-22 10:47:49.085594] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.417 [2024-07-22 10:47:49.085598] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.417 [2024-07-22 10:47:49.085602] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.417 [2024-07-22 10:47:49.085607] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.417 [2024-07-22 10:47:49.085611] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.417 [2024-07-22 10:47:49.085616] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82d00 is same with the state(5) to be set 00:31:43.417 [2024-07-22 10:47:49.085891] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.417 [2024-07-22 10:47:49.085908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.417 [2024-07-22 10:47:49.085921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.417 [2024-07-22 10:47:49.085931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.417 [2024-07-22 10:47:49.085940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.417 [2024-07-22 10:47:49.085948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.417 [2024-07-22 10:47:49.085957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.417 [2024-07-22 10:47:49.085964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.417 [2024-07-22 10:47:49.085973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.417 [2024-07-22 10:47:49.085981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.417 [2024-07-22 10:47:49.085990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.417 [2024-07-22 10:47:49.085997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.417 [2024-07-22 10:47:49.086007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.417 [2024-07-22 10:47:49.086014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.417 [2024-07-22 10:47:49.086023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.417 [2024-07-22 10:47:49.086030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.417 [2024-07-22 10:47:49.086039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.417 [2024-07-22 10:47:49.086046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.417 [2024-07-22 10:47:49.086055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.417 [2024-07-22 10:47:49.086062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.417 [2024-07-22 10:47:49.086071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.417 [2024-07-22 10:47:49.086078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.417 [2024-07-22 10:47:49.086088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.417 [2024-07-22 10:47:49.086095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.417 [2024-07-22 10:47:49.086104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.417 [2024-07-22 10:47:49.086111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.417 [2024-07-22 10:47:49.086121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.417 [2024-07-22 10:47:49.086127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.417 [2024-07-22 10:47:49.086138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.417 [2024-07-22 10:47:49.086145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.417 [2024-07-22 10:47:49.086154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.417 [2024-07-22 10:47:49.086161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.417 [2024-07-22 10:47:49.086170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.417 [2024-07-22 10:47:49.086177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.417 [2024-07-22 10:47:49.086186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.417 [2024-07-22 10:47:49.086193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.417 [2024-07-22 10:47:49.086203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.417 [2024-07-22 10:47:49.086210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.417 [2024-07-22 10:47:49.086219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.417 [2024-07-22 10:47:49.086226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.417 [2024-07-22 10:47:49.086234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 
lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.417 [2024-07-22 10:47:49.086241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.417 [2024-07-22 10:47:49.086251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.417 [2024-07-22 10:47:49.086258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.417 [2024-07-22 10:47:49.086268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.417 [2024-07-22 10:47:49.086274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.417 [2024-07-22 10:47:49.086284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.417 [2024-07-22 10:47:49.086291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.417 [2024-07-22 10:47:49.086300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.417 [2024-07-22 10:47:49.086307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.417 [2024-07-22 10:47:49.086316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.417 [2024-07-22 10:47:49.086323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.417 [2024-07-22 10:47:49.086332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.417 [2024-07-22 10:47:49.086340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.417 [2024-07-22 10:47:49.086350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.417 [2024-07-22 10:47:49.086356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.417 [2024-07-22 10:47:49.086365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.417 [2024-07-22 10:47:49.086372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.417 [2024-07-22 10:47:49.086381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.417 [2024-07-22 10:47:49.086388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.417 [2024-07-22 10:47:49.086402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.417 [2024-07-22 10:47:49.086410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.417 [2024-07-22 10:47:49.086419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.417 [2024-07-22 10:47:49.086426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.417 [2024-07-22 10:47:49.086435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.417 [2024-07-22 10:47:49.086442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.417 [2024-07-22 10:47:49.086451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.417 [2024-07-22 10:47:49.086458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.417 [2024-07-22 10:47:49.086467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.417 [2024-07-22 10:47:49.086474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.418 [2024-07-22 10:47:49.086483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.418 [2024-07-22 10:47:49.086490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.418 [2024-07-22 10:47:49.086499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.418 [2024-07-22 10:47:49.086506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.418 [2024-07-22 10:47:49.086515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.418 [2024-07-22 10:47:49.086525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.418 [2024-07-22 10:47:49.086534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.418 [2024-07-22 10:47:49.086528] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with [2024-07-22 10:47:49.086541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:31:43.418 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.418 [2024-07-22 10:47:49.086553] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with [2024-07-22 10:47:49.086553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:1the state(5) to be set 00:31:43.418 28 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:43.418 [2024-07-22 10:47:49.086561] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with the state(5) to be set 00:31:43.418 [2024-07-22 10:47:49.086563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.418 [2024-07-22 10:47:49.086567] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with the state(5) to be set 00:31:43.418 [2024-07-22 10:47:49.086572] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with the state(5) to be set 00:31:43.418 [2024-07-22 10:47:49.086573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.418 [2024-07-22 10:47:49.086577] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with the state(5) to be set 00:31:43.418 [2024-07-22 10:47:49.086580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.418 [2024-07-22 10:47:49.086583] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with the state(5) to be set 00:31:43.418 [2024-07-22 10:47:49.086588] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with the state(5) to be set 00:31:43.418 [2024-07-22 10:47:49.086590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.418 [2024-07-22 10:47:49.086592] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with the state(5) to be set 00:31:43.418 [2024-07-22 10:47:49.086598] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with [2024-07-22 10:47:49.086597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:31:43.418 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.418 [2024-07-22 10:47:49.086605] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with the state(5) to be set 00:31:43.418 [2024-07-22 10:47:49.086609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:1[2024-07-22 10:47:49.086610] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.418 the state(5) to be set 00:31:43.418 [2024-07-22 10:47:49.086617] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with the state(5) to be set 00:31:43.418 [2024-07-22 10:47:49.086617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.418 [2024-07-22 10:47:49.086622] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with the state(5) to be set 00:31:43.418 [2024-07-22 10:47:49.086627] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with the state(5) to be set 00:31:43.418 [2024-07-22 10:47:49.086627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:43.418 [2024-07-22 10:47:49.086633] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with the state(5) to be set 00:31:43.418 [2024-07-22 10:47:49.086635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-22 10:47:49.086638] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.418 the state(5) to be set 00:31:43.418 [2024-07-22 10:47:49.086645] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with the state(5) to be set 00:31:43.418 [2024-07-22 10:47:49.086648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.418 [2024-07-22 10:47:49.086650] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with the state(5) to be set 00:31:43.418 [2024-07-22 10:47:49.086656] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with [2024-07-22 10:47:49.086656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:31:43.418 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.418 [2024-07-22 10:47:49.086663] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with the state(5) to be set 00:31:43.418 [2024-07-22 10:47:49.086668] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with [2024-07-22 10:47:49.086667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:1the state(5) to be set 00:31:43.418 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.418 [2024-07-22 10:47:49.086674] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with the state(5) to be set 00:31:43.418 [2024-07-22 10:47:49.086677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.418 [2024-07-22 10:47:49.086679] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with the state(5) to be set 00:31:43.418 [2024-07-22 10:47:49.086684] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with the state(5) to be set 00:31:43.418 [2024-07-22 10:47:49.086686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.418 [2024-07-22 10:47:49.086689] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with the state(5) to be set 00:31:43.418 [2024-07-22 10:47:49.086694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-22 10:47:49.086695] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.418 the state(5) to be set 00:31:43.418 [2024-07-22 10:47:49.086702] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with the state(5) to be set 00:31:43.418 [2024-07-22 10:47:49.086705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:43.418 [2024-07-22 10:47:49.086707] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with the state(5) to be set 00:31:43.418 [2024-07-22 10:47:49.086712] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with the state(5) to be set 00:31:43.418 [2024-07-22 10:47:49.086712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.418 [2024-07-22 10:47:49.086718] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with the state(5) to be set 00:31:43.418 [2024-07-22 10:47:49.086723] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with the state(5) to be set 00:31:43.418 [2024-07-22 10:47:49.086723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.418 [2024-07-22 10:47:49.086729] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with the state(5) to be set 00:31:43.418 [2024-07-22 10:47:49.086731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.418 [2024-07-22 10:47:49.086735] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with the state(5) to be set 00:31:43.418 [2024-07-22 10:47:49.086741] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with the state(5) to be set 00:31:43.418 [2024-07-22 10:47:49.086741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.418 [2024-07-22 10:47:49.086745] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with the state(5) to be set 00:31:43.418 [2024-07-22 10:47:49.086749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-22 10:47:49.086750] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.418 the state(5) to be set 00:31:43.418 [2024-07-22 10:47:49.086757] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with the state(5) to be set 00:31:43.418 [2024-07-22 10:47:49.086759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:1[2024-07-22 10:47:49.086761] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.418 the state(5) to be set 00:31:43.418 [2024-07-22 10:47:49.086768] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with the state(5) to be set 00:31:43.418 [2024-07-22 10:47:49.086768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.418 [2024-07-22 10:47:49.086772] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with the state(5) to be set 00:31:43.418 [2024-07-22 10:47:49.086778] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with the state(5) to be set 
00:31:43.418 [2024-07-22 10:47:49.086778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.418 [2024-07-22 10:47:49.086782] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with the state(5) to be set 00:31:43.418 [2024-07-22 10:47:49.086785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.418 [2024-07-22 10:47:49.086787] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with the state(5) to be set 00:31:43.418 [2024-07-22 10:47:49.086793] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with the state(5) to be set 00:31:43.418 [2024-07-22 10:47:49.086795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.418 [2024-07-22 10:47:49.086797] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with the state(5) to be set 00:31:43.418 [2024-07-22 10:47:49.086803] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with the state(5) to be set 00:31:43.418 [2024-07-22 10:47:49.086803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.418 [2024-07-22 10:47:49.086807] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with the state(5) to be set 00:31:43.418 [2024-07-22 10:47:49.086813] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with the state(5) to be set 00:31:43.418 [2024-07-22 10:47:49.086813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.418 [2024-07-22 10:47:49.086817] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with the state(5) to be set 00:31:43.418 [2024-07-22 10:47:49.086823] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with [2024-07-22 10:47:49.086822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:31:43.418 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.419 [2024-07-22 10:47:49.086830] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.086835] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with [2024-07-22 10:47:49.086834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:1the state(5) to be set 00:31:43.419 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.419 [2024-07-22 10:47:49.086841] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.086843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.419 [2024-07-22 10:47:49.086846] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with the state(5) to be set 
00:31:43.419 [2024-07-22 10:47:49.086851] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.086853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.419 [2024-07-22 10:47:49.086857] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.086861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-22 10:47:49.086862] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.419 the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.086869] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.086871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:1[2024-07-22 10:47:49.086873] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.419 the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.086880] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.086880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.419 [2024-07-22 10:47:49.086885] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.086890] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f831b0 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.086890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.419 [2024-07-22 10:47:49.086899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.419 [2024-07-22 10:47:49.086908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.419 [2024-07-22 10:47:49.086914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.419 [2024-07-22 10:47:49.086923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.419 [2024-07-22 10:47:49.086932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.419 [2024-07-22 10:47:49.086941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.419 [2024-07-22 10:47:49.086948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.419 [2024-07-22 10:47:49.087501] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.087514] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.087519] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.087523] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.087528] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.087532] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.087537] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.087541] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.087545] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.087550] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.087554] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.087559] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.087563] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.087568] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.087572] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.087577] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.087581] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.087586] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.087590] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.087595] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.087599] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.087603] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.087608] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.087612] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.087620] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.087624] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.087629] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.087633] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.087637] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.087642] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.087646] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.087651] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.087656] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.087660] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.087665] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.087669] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.087674] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.087724] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.087774] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.087821] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.087869] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.087918] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.087966] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.088017] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the 
state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.088064] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.088112] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.088165] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.088214] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.088267] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.088314] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.088363] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.088421] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.088469] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.088517] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.088570] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.088622] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.088671] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.088718] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.088765] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.088816] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.088863] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.088909] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.088957] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83660 is same with the state(5) to be set 00:31:43.419 [2024-07-22 10:47:49.089455] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.420 [2024-07-22 10:47:49.089469] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.420 [2024-07-22 10:47:49.089476] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.420 [2024-07-22 10:47:49.089481] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.420 [2024-07-22 10:47:49.089485] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.420 [2024-07-22 10:47:49.089490] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.420 [2024-07-22 10:47:49.089494] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.420 [2024-07-22 10:47:49.089499] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.420 [2024-07-22 10:47:49.089503] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.420 [2024-07-22 10:47:49.089507] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.420 [2024-07-22 10:47:49.089520] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.420 [2024-07-22 10:47:49.089692] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.420 [2024-07-22 10:47:49.089712] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.420 [2024-07-22 10:47:49.089719] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.420 [2024-07-22 10:47:49.089733] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.420 [2024-07-22 10:47:49.089781] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.420 [2024-07-22 10:47:49.089831] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.420 [2024-07-22 10:47:49.089878] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.420 [2024-07-22 10:47:49.089930] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.420 [2024-07-22 10:47:49.089983] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.420 [2024-07-22 10:47:49.090034] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.420 [2024-07-22 10:47:49.090081] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.420 [2024-07-22 10:47:49.090133] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.420 [2024-07-22 10:47:49.090180] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.420 [2024-07-22 
10:47:49.090232] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.420 [2024-07-22 10:47:49.090280] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.420 [2024-07-22 10:47:49.090335] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.420 [2024-07-22 10:47:49.102390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.420 [2024-07-22 10:47:49.102434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.420 [2024-07-22 10:47:49.102444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.420 [2024-07-22 10:47:49.102452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.420 [2024-07-22 10:47:49.102462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.420 [2024-07-22 10:47:49.102469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.420 [2024-07-22 10:47:49.102497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:43.420 [2024-07-22 10:47:49.102536] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d34b20 was disconnected and freed. reset controller. 
00:31:43.420 [2024-07-22 10:47:49.102579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.420 [2024-07-22 10:47:49.102589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.420 [2024-07-22 10:47:49.102601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.420 [2024-07-22 10:47:49.102609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.420 [2024-07-22 10:47:49.102619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.420 [2024-07-22 10:47:49.102626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.420 [2024-07-22 10:47:49.102635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.420 [2024-07-22 10:47:49.102647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.420 [2024-07-22 10:47:49.102656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.420 [2024-07-22 10:47:49.102664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.420 [2024-07-22 10:47:49.102674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.420 [2024-07-22 10:47:49.102681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.420 [2024-07-22 10:47:49.102690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.420 [2024-07-22 10:47:49.102698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.420 [2024-07-22 10:47:49.102707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.420 [2024-07-22 10:47:49.102714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.420 [2024-07-22 10:47:49.102724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.420 [2024-07-22 10:47:49.102730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.420 [2024-07-22 10:47:49.102740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.420 [2024-07-22 10:47:49.102747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.420 [2024-07-22 
10:47:49.102756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.420 [2024-07-22 10:47:49.102764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.420 [2024-07-22 10:47:49.102773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.420 [2024-07-22 10:47:49.102780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.420 [2024-07-22 10:47:49.102790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.420 [2024-07-22 10:47:49.102797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.420 [2024-07-22 10:47:49.102806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.420 [2024-07-22 10:47:49.102813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.420 [2024-07-22 10:47:49.102822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.420 [2024-07-22 10:47:49.102829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.420 [2024-07-22 10:47:49.102838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.420 [2024-07-22 10:47:49.102845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.420 [2024-07-22 10:47:49.102856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.420 [2024-07-22 10:47:49.102863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.420 [2024-07-22 10:47:49.102872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.420 [2024-07-22 10:47:49.102880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.420 [2024-07-22 10:47:49.102889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.420 [2024-07-22 10:47:49.102896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.420 [2024-07-22 10:47:49.102905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.420 [2024-07-22 10:47:49.102912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.420 [2024-07-22 
10:47:49.102921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.420 [2024-07-22 10:47:49.102928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.421 [2024-07-22 10:47:49.102937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.421 [2024-07-22 10:47:49.102945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.421 [2024-07-22 10:47:49.102954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.421 [2024-07-22 10:47:49.102961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.421 [2024-07-22 10:47:49.102970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.421 [2024-07-22 10:47:49.102977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.421 [2024-07-22 10:47:49.102986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.421 [2024-07-22 10:47:49.102993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.421 [2024-07-22 10:47:49.103002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.421 [2024-07-22 10:47:49.103010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.421 [2024-07-22 10:47:49.103019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.421 [2024-07-22 10:47:49.103025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.421 [2024-07-22 10:47:49.103035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.421 [2024-07-22 10:47:49.103042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.421 [2024-07-22 10:47:49.103052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.421 [2024-07-22 10:47:49.103060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.421 [2024-07-22 10:47:49.103069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.421 [2024-07-22 10:47:49.103076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.421 [2024-07-22 
10:47:49.103085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.421 [2024-07-22 10:47:49.103092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.421 [2024-07-22 10:47:49.103101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.421 [2024-07-22 10:47:49.103108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.421 [2024-07-22 10:47:49.103118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.421 [2024-07-22 10:47:49.103125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.421 [2024-07-22 10:47:49.103134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.421 [2024-07-22 10:47:49.103141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.421 [2024-07-22 10:47:49.103150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.421 [2024-07-22 10:47:49.103157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.421 [2024-07-22 10:47:49.103166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.421 [2024-07-22 10:47:49.103173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.421 [2024-07-22 10:47:49.103182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.421 [2024-07-22 10:47:49.103190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.421 [2024-07-22 10:47:49.103199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.421 [2024-07-22 10:47:49.103206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.421 [2024-07-22 10:47:49.103215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.421 [2024-07-22 10:47:49.103222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.421 [2024-07-22 10:47:49.103231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.421 [2024-07-22 10:47:49.103238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.421 [2024-07-22 
10:47:49.103248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.421 [2024-07-22 10:47:49.103255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.421 [2024-07-22 10:47:49.103265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.421 [2024-07-22 10:47:49.103272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.421 [2024-07-22 10:47:49.103281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.421 [2024-07-22 10:47:49.103288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.421 [2024-07-22 10:47:49.103297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.421 [2024-07-22 10:47:49.103305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.421 [2024-07-22 10:47:49.103314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.421 [2024-07-22 10:47:49.103321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.421 [2024-07-22 10:47:49.103330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.421 [2024-07-22 10:47:49.103337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.421 [2024-07-22 10:47:49.103346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.421 [2024-07-22 10:47:49.103353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.421 [2024-07-22 10:47:49.103362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.421 [2024-07-22 10:47:49.103369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.421 [2024-07-22 10:47:49.103378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.421 [2024-07-22 10:47:49.103385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.421 [2024-07-22 10:47:49.103401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.421 [2024-07-22 10:47:49.103409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.421 [2024-07-22 
10:47:49.103418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.421 [2024-07-22 10:47:49.103425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.421 [2024-07-22 10:47:49.103434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.421 [2024-07-22 10:47:49.103441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.421 [2024-07-22 10:47:49.103450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.421 [2024-07-22 10:47:49.103457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.421 [2024-07-22 10:47:49.103466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.421 [2024-07-22 10:47:49.103476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.421 [2024-07-22 10:47:49.103485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.421 [2024-07-22 10:47:49.103492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.421 [2024-07-22 10:47:49.103501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.421 [2024-07-22 10:47:49.103508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.421 [2024-07-22 10:47:49.103517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.421 [2024-07-22 10:47:49.103524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.421 [2024-07-22 10:47:49.103533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.421 [2024-07-22 10:47:49.103540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.421 [2024-07-22 10:47:49.103549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.421 [2024-07-22 10:47:49.103556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.421 [2024-07-22 10:47:49.103565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.421 [2024-07-22 10:47:49.103572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.421 [2024-07-22 
10:47:49.103581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.421 [2024-07-22 10:47:49.103588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.421 [2024-07-22 10:47:49.103597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.421 [2024-07-22 10:47:49.103606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.421 [2024-07-22 10:47:49.103615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.421 [2024-07-22 10:47:49.103622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.421 [2024-07-22 10:47:49.103631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.422 [2024-07-22 10:47:49.103638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.422 [2024-07-22 10:47:49.103647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ca7d0 is same with the state(5) to be set 00:31:43.422 [2024-07-22 10:47:49.103681] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18ca7d0 was disconnected and freed. reset controller. 00:31:43.690 [2024-07-22 10:47:49.105109] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:31:43.690 [2024-07-22 10:47:49.105137] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eeb9b0 (9): Bad file descriptor 00:31:43.690 [2024-07-22 10:47:49.105187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:43.690 [2024-07-22 10:47:49.105197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.690 [2024-07-22 10:47:49.105205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:43.690 [2024-07-22 10:47:49.105212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.690 [2024-07-22 10:47:49.105221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:43.690 [2024-07-22 10:47:49.105228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.690 [2024-07-22 10:47:49.105235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:43.690 [2024-07-22 10:47:49.105242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.690 [2024-07-22 10:47:49.105249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea4910 is 
same with the state(5) to be set 00:31:43.690 [2024-07-22 10:47:49.105266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d05ca0 (9): Bad file descriptor 00:31:43.690 [2024-07-22 10:47:49.105280] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e90690 (9): Bad file descriptor 00:31:43.690 [2024-07-22 10:47:49.105313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:43.690 [2024-07-22 10:47:49.105322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.690 [2024-07-22 10:47:49.105330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:43.690 [2024-07-22 10:47:49.105337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.690 [2024-07-22 10:47:49.105345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:43.690 [2024-07-22 10:47:49.105352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.690 [2024-07-22 10:47:49.105360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:43.690 [2024-07-22 10:47:49.105366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.690 [2024-07-22 10:47:49.105373] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e9e420 is same with the state(5) to be set 00:31:43.690 [2024-07-22 10:47:49.105389] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cffb70 (9): Bad file descriptor 00:31:43.690 [2024-07-22 10:47:49.105407] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18c6f50 (9): Bad file descriptor 00:31:43.690 [2024-07-22 10:47:49.105431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:43.690 [2024-07-22 10:47:49.105439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.690 [2024-07-22 10:47:49.105447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:43.690 [2024-07-22 10:47:49.105455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.690 [2024-07-22 10:47:49.105465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:43.690 [2024-07-22 10:47:49.105473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.690 [2024-07-22 10:47:49.105480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:43.690 [2024-07-22 10:47:49.105487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.690 [2024-07-22 10:47:49.105494] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f7610 is same with the state(5) to be set 00:31:43.690 [2024-07-22 10:47:49.105510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d07480 (9): Bad file descriptor 00:31:43.690 [2024-07-22 10:47:49.107378] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.690 [2024-07-22 10:47:49.107403] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.690 [2024-07-22 10:47:49.107409] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.690 [2024-07-22 10:47:49.107415] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.690 [2024-07-22 10:47:49.107420] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.690 [2024-07-22 10:47:49.107424] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.690 [2024-07-22 10:47:49.107429] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.690 [2024-07-22 10:47:49.107433] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.690 [2024-07-22 10:47:49.107438] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.690 [2024-07-22 10:47:49.107442] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.690 [2024-07-22 10:47:49.107447] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.690 [2024-07-22 10:47:49.107451] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.690 [2024-07-22 10:47:49.107455] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.690 [2024-07-22 10:47:49.107460] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.690 [2024-07-22 10:47:49.107464] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.690 [2024-07-22 10:47:49.107468] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.690 [2024-07-22 10:47:49.107473] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.690 [2024-07-22 10:47:49.107477] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.690 [2024-07-22 10:47:49.107481] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.690 [2024-07-22 
10:47:49.107485] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.690 [2024-07-22 10:47:49.107490] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.690 [2024-07-22 10:47:49.107497] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.690 [2024-07-22 10:47:49.107502] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.690 [2024-07-22 10:47:49.107507] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.690 [2024-07-22 10:47:49.107511] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.690 [2024-07-22 10:47:49.107515] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.690 [2024-07-22 10:47:49.107520] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.691 [2024-07-22 10:47:49.107524] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.691 [2024-07-22 10:47:49.107528] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.691 [2024-07-22 10:47:49.107533] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.691 [2024-07-22 10:47:49.107537] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.691 [2024-07-22 10:47:49.107542] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.691 [2024-07-22 10:47:49.107546] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.691 [2024-07-22 10:47:49.107550] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.691 [2024-07-22 10:47:49.107554] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.691 [2024-07-22 10:47:49.107559] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83b10 is same with the state(5) to be set 00:31:43.691 [2024-07-22 10:47:49.108106] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:31:43.691 [2024-07-22 10:47:49.108127] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:31:43.691 [2024-07-22 10:47:49.109062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.691 [2024-07-22 10:47:49.109085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eeb9b0 with addr=10.0.0.2, port=4420 00:31:43.691 [2024-07-22 10:47:49.109094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eeb9b0 is same with the state(5) to be set 00:31:43.691 [2024-07-22 10:47:49.109628] 
posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.691 [2024-07-22 10:47:49.109664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cffb70 with addr=10.0.0.2, port=4420 00:31:43.691 [2024-07-22 10:47:49.109676] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cffb70 is same with the state(5) to be set 00:31:43.691 [2024-07-22 10:47:49.110065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.691 [2024-07-22 10:47:49.110076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d05ca0 with addr=10.0.0.2, port=4420 00:31:43.691 [2024-07-22 10:47:49.110083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d05ca0 is same with the state(5) to be set 00:31:43.691 [2024-07-22 10:47:49.110882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eeb9b0 (9): Bad file descriptor 00:31:43.691 [2024-07-22 10:47:49.110903] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cffb70 (9): Bad file descriptor 00:31:43.691 [2024-07-22 10:47:49.110918] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d05ca0 (9): Bad file descriptor 00:31:43.691 [2024-07-22 10:47:49.110970] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:31:43.691 [2024-07-22 10:47:49.111023] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:31:43.691 [2024-07-22 10:47:49.111057] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:31:43.691 [2024-07-22 10:47:49.111092] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:31:43.691 [2024-07-22 10:47:49.111124] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:31:43.691 [2024-07-22 10:47:49.111160] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:31:43.691 [2024-07-22 10:47:49.111216] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:31:43.691 [2024-07-22 10:47:49.111225] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:31:43.691 [2024-07-22 10:47:49.111233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:31:43.691 [2024-07-22 10:47:49.111248] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:31:43.691 [2024-07-22 10:47:49.111255] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:31:43.691 [2024-07-22 10:47:49.111261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:31:43.691 [2024-07-22 10:47:49.111272] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:31:43.691 [2024-07-22 10:47:49.111278] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:31:43.691 [2024-07-22 10:47:49.111285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:31:43.691 [2024-07-22 10:47:49.111386] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:31:43.691 [2024-07-22 10:47:49.111408] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:43.691 [2024-07-22 10:47:49.111415] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:43.691 [2024-07-22 10:47:49.111422] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:43.691 [2024-07-22 10:47:49.115142] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ea4910 (9): Bad file descriptor 00:31:43.691 [2024-07-22 10:47:49.115186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:43.691 [2024-07-22 10:47:49.115196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.691 [2024-07-22 10:47:49.115206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:43.691 [2024-07-22 10:47:49.115213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.691 [2024-07-22 10:47:49.115221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:43.691 [2024-07-22 10:47:49.115228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.691 [2024-07-22 10:47:49.115236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:43.691 [2024-07-22 10:47:49.115243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.691 [2024-07-22 10:47:49.115250] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee49b0 is same with the state(5) to be set 00:31:43.691 [2024-07-22 10:47:49.115270] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e9e420 (9): Bad file descriptor 00:31:43.691 [2024-07-22 10:47:49.115293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f7610 (9): Bad file descriptor 00:31:43.691 [2024-07-22 10:47:49.115387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.691 [2024-07-22 10:47:49.115403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.691 [2024-07-22 10:47:49.115418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.691 [2024-07-22 10:47:49.115425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.691 [2024-07-22 10:47:49.115435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.691 [2024-07-22 10:47:49.115442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.691 [2024-07-22 10:47:49.115451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.691 [2024-07-22 10:47:49.115459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.691 [2024-07-22 10:47:49.115468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.691 [2024-07-22 10:47:49.115475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.691 [2024-07-22 10:47:49.115485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.691 [2024-07-22 10:47:49.115492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.691 [2024-07-22 10:47:49.115501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.691 [2024-07-22 10:47:49.115508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.691 [2024-07-22 10:47:49.115517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.691 [2024-07-22 10:47:49.115524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.691 [2024-07-22 10:47:49.115534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.691 [2024-07-22 10:47:49.115541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.691 [2024-07-22 10:47:49.115550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.691 [2024-07-22 10:47:49.115557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.691 [2024-07-22 10:47:49.115566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.691 [2024-07-22 10:47:49.115574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.691 [2024-07-22 10:47:49.115583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.691 [2024-07-22 10:47:49.115592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.691 [2024-07-22 10:47:49.115601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.691 [2024-07-22 10:47:49.115609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:31:43.691 [2024-07-22 10:47:49.115618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.691 [2024-07-22 10:47:49.115625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.691 [2024-07-22 10:47:49.115634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.691 [2024-07-22 10:47:49.115641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.691 [2024-07-22 10:47:49.115651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.691 [2024-07-22 10:47:49.115658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.691 [2024-07-22 10:47:49.115667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.691 [2024-07-22 10:47:49.115674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.691 [2024-07-22 10:47:49.115683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.691 [2024-07-22 10:47:49.115690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.691 [2024-07-22 10:47:49.115699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.691 [2024-07-22 10:47:49.115706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.692 [2024-07-22 10:47:49.115716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.692 [2024-07-22 10:47:49.115723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.692 [2024-07-22 10:47:49.115732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.692 [2024-07-22 10:47:49.115739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.692 [2024-07-22 10:47:49.115749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.692 [2024-07-22 10:47:49.115756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.692 [2024-07-22 10:47:49.115765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.692 [2024-07-22 10:47:49.115772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:43.692 [2024-07-22 10:47:49.115781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.692 [2024-07-22 10:47:49.115788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.692 [2024-07-22 10:47:49.115798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.692 [2024-07-22 10:47:49.115806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.692 [2024-07-22 10:47:49.115815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.692 [2024-07-22 10:47:49.115822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.692 [2024-07-22 10:47:49.115832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.692 [2024-07-22 10:47:49.115839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.692 [2024-07-22 10:47:49.115848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.692 [2024-07-22 10:47:49.115855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.692 [2024-07-22 10:47:49.115864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.692 [2024-07-22 10:47:49.115871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.692 [2024-07-22 10:47:49.115880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.692 [2024-07-22 10:47:49.115887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.692 [2024-07-22 10:47:49.115897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.692 [2024-07-22 10:47:49.115904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.692 [2024-07-22 10:47:49.115913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.692 [2024-07-22 10:47:49.115920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.692 [2024-07-22 10:47:49.115929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.692 [2024-07-22 10:47:49.115937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.692 [2024-07-22 
10:47:49.115946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.692 [2024-07-22 10:47:49.115952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.692 [2024-07-22 10:47:49.115962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.692 [2024-07-22 10:47:49.115969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.692 [2024-07-22 10:47:49.115978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.692 [2024-07-22 10:47:49.115985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.692 [2024-07-22 10:47:49.115995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.692 [2024-07-22 10:47:49.116002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.692 [2024-07-22 10:47:49.116013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.692 [2024-07-22 10:47:49.116020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.692 [2024-07-22 10:47:49.116030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.692 [2024-07-22 10:47:49.116037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.692 [2024-07-22 10:47:49.116046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.692 [2024-07-22 10:47:49.116053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.692 [2024-07-22 10:47:49.116062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.692 [2024-07-22 10:47:49.116069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.692 [2024-07-22 10:47:49.116078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.692 [2024-07-22 10:47:49.116085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.692 [2024-07-22 10:47:49.116095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.692 [2024-07-22 10:47:49.116102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.692 [2024-07-22 10:47:49.116111] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.692 [2024-07-22 10:47:49.116118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.692 [2024-07-22 10:47:49.116128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.692 [2024-07-22 10:47:49.116135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.692 [2024-07-22 10:47:49.116144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.692 [2024-07-22 10:47:49.116151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.692 [2024-07-22 10:47:49.116161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.692 [2024-07-22 10:47:49.116168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.692 [2024-07-22 10:47:49.116177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.692 [2024-07-22 10:47:49.116184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.692 [2024-07-22 10:47:49.116193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.692 [2024-07-22 10:47:49.116200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.692 [2024-07-22 10:47:49.116210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.692 [2024-07-22 10:47:49.116218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.692 [2024-07-22 10:47:49.116227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.692 [2024-07-22 10:47:49.116234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.692 [2024-07-22 10:47:49.116243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.692 [2024-07-22 10:47:49.116251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.692 [2024-07-22 10:47:49.116260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.692 [2024-07-22 10:47:49.116267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.692 [2024-07-22 10:47:49.116276] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.692 [2024-07-22 10:47:49.116283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.692 [2024-07-22 10:47:49.116292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.692 [2024-07-22 10:47:49.116299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.692 [2024-07-22 10:47:49.116309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.692 [2024-07-22 10:47:49.116315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.692 [2024-07-22 10:47:49.116325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.692 [2024-07-22 10:47:49.116331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.692 [2024-07-22 10:47:49.116341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.692 [2024-07-22 10:47:49.116348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.692 [2024-07-22 10:47:49.116357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.692 [2024-07-22 10:47:49.116364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.692 [2024-07-22 10:47:49.116373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.692 [2024-07-22 10:47:49.116380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.692 [2024-07-22 10:47:49.116389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.692 [2024-07-22 10:47:49.116399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.693 [2024-07-22 10:47:49.116411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.693 [2024-07-22 10:47:49.116418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.693 [2024-07-22 10:47:49.116429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.693 [2024-07-22 10:47:49.116436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.693 [2024-07-22 10:47:49.116446] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.693 [2024-07-22 10:47:49.116453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.693 [2024-07-22 10:47:49.116461] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d338e0 is same with the state(5) to be set 00:31:43.693 [2024-07-22 10:47:49.117741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.693 [2024-07-22 10:47:49.117754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.693 [2024-07-22 10:47:49.117766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.693 [2024-07-22 10:47:49.117773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.693 [2024-07-22 10:47:49.117782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.693 [2024-07-22 10:47:49.117789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.693 [2024-07-22 10:47:49.117799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.693 [2024-07-22 10:47:49.117806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.693 [2024-07-22 10:47:49.117815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.693 [2024-07-22 10:47:49.117822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.693 [2024-07-22 10:47:49.117832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.693 [2024-07-22 10:47:49.117839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.693 [2024-07-22 10:47:49.117848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.693 [2024-07-22 10:47:49.117855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.693 [2024-07-22 10:47:49.117865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.693 [2024-07-22 10:47:49.117872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.693 [2024-07-22 10:47:49.117881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.693 [2024-07-22 10:47:49.117888] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.693 [2024-07-22 10:47:49.117898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.693 [2024-07-22 10:47:49.117904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.693 [2024-07-22 10:47:49.117916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.693 [2024-07-22 10:47:49.117923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.693 [2024-07-22 10:47:49.117933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.693 [2024-07-22 10:47:49.117940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.693 [2024-07-22 10:47:49.117949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.693 [2024-07-22 10:47:49.117956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.693 [2024-07-22 10:47:49.117965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.693 [2024-07-22 10:47:49.117972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.693 [2024-07-22 10:47:49.117982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.693 [2024-07-22 10:47:49.117989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.693 [2024-07-22 10:47:49.117998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.693 [2024-07-22 10:47:49.118004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.693 [2024-07-22 10:47:49.118014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.693 [2024-07-22 10:47:49.118021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.693 [2024-07-22 10:47:49.118030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.693 [2024-07-22 10:47:49.118037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.693 [2024-07-22 10:47:49.118046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.693 [2024-07-22 10:47:49.118053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.693 [2024-07-22 10:47:49.118063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.693 [2024-07-22 10:47:49.118069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.693 [2024-07-22 10:47:49.118079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.693 [2024-07-22 10:47:49.118086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.693 [2024-07-22 10:47:49.118095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.693 [2024-07-22 10:47:49.118103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.693 [2024-07-22 10:47:49.118112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.693 [2024-07-22 10:47:49.118120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.693 [2024-07-22 10:47:49.118130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.693 [2024-07-22 10:47:49.118137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.693 [2024-07-22 10:47:49.118146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.693 [2024-07-22 10:47:49.118153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.693 [2024-07-22 10:47:49.118162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.693 [2024-07-22 10:47:49.118170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.693 [2024-07-22 10:47:49.118179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.693 [2024-07-22 10:47:49.118186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.693 [2024-07-22 10:47:49.118195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.693 [2024-07-22 10:47:49.118202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.693 [2024-07-22 10:47:49.118211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.693 [2024-07-22 10:47:49.118218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.693 [2024-07-22 10:47:49.118227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.693 [2024-07-22 10:47:49.118234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.693 [2024-07-22 10:47:49.118243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.693 [2024-07-22 10:47:49.118251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.693 [2024-07-22 10:47:49.118260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.693 [2024-07-22 10:47:49.118267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.693 [2024-07-22 10:47:49.118276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.693 [2024-07-22 10:47:49.118282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.693 [2024-07-22 10:47:49.118292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.693 [2024-07-22 10:47:49.118299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.693 [2024-07-22 10:47:49.118308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.693 [2024-07-22 10:47:49.118315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.693 [2024-07-22 10:47:49.118325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.693 [2024-07-22 10:47:49.118332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.693 [2024-07-22 10:47:49.118342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.693 [2024-07-22 10:47:49.118348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.693 [2024-07-22 10:47:49.118358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.693 [2024-07-22 10:47:49.118365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.693 [2024-07-22 10:47:49.118374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.693 [2024-07-22 10:47:49.118380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:31:43.693 [2024-07-22 10:47:49.118390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.694 [2024-07-22 10:47:49.118401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.694 [2024-07-22 10:47:49.118413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.694 [2024-07-22 10:47:49.118420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.694 [2024-07-22 10:47:49.118429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.694 [2024-07-22 10:47:49.118436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.694 [2024-07-22 10:47:49.118445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.694 [2024-07-22 10:47:49.118452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.694 [2024-07-22 10:47:49.118461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.694 [2024-07-22 10:47:49.118468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.694 [2024-07-22 10:47:49.118477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.694 [2024-07-22 10:47:49.118484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.694 [2024-07-22 10:47:49.118493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.694 [2024-07-22 10:47:49.118500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.694 [2024-07-22 10:47:49.118509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.694 [2024-07-22 10:47:49.118516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.694 [2024-07-22 10:47:49.118525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.694 [2024-07-22 10:47:49.118534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.694 [2024-07-22 10:47:49.118544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.694 [2024-07-22 10:47:49.118550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:43.694 [2024-07-22 10:47:49.118560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.694 [2024-07-22 10:47:49.118567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.694 [2024-07-22 10:47:49.118577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.694 [2024-07-22 10:47:49.118584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.694 [2024-07-22 10:47:49.118593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.694 [2024-07-22 10:47:49.118600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.694 [2024-07-22 10:47:49.118609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.694 [2024-07-22 10:47:49.118616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.694 [2024-07-22 10:47:49.118625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.694 [2024-07-22 10:47:49.118632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.694 [2024-07-22 10:47:49.118642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.694 [2024-07-22 10:47:49.118649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.694 [2024-07-22 10:47:49.118658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.694 [2024-07-22 10:47:49.118665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.694 [2024-07-22 10:47:49.118675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.694 [2024-07-22 10:47:49.118682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.694 [2024-07-22 10:47:49.118691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.694 [2024-07-22 10:47:49.118698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.694 [2024-07-22 10:47:49.118707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.694 [2024-07-22 10:47:49.118714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.694 [2024-07-22 
10:47:49.118723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.694 [2024-07-22 10:47:49.118730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.694 [2024-07-22 10:47:49.118741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.694 [2024-07-22 10:47:49.118748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.694 [2024-07-22 10:47:49.118757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.694 [2024-07-22 10:47:49.118763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.694 [2024-07-22 10:47:49.118772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.694 [2024-07-22 10:47:49.118779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.694 [2024-07-22 10:47:49.118789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.694 [2024-07-22 10:47:49.118796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.694 [2024-07-22 10:47:49.118804] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cbd40 is same with the state(5) to be set 00:31:43.694 [2024-07-22 10:47:49.120064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.694 [2024-07-22 10:47:49.120075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.694 [2024-07-22 10:47:49.120086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.694 [2024-07-22 10:47:49.120094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.694 [2024-07-22 10:47:49.120103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.694 [2024-07-22 10:47:49.120110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.694 [2024-07-22 10:47:49.120119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.694 [2024-07-22 10:47:49.120127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.694 [2024-07-22 10:47:49.120136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.694 [2024-07-22 10:47:49.120143] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.694 [2024-07-22 10:47:49.120152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.694 [2024-07-22 10:47:49.120159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.694 [2024-07-22 10:47:49.120168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.694 [2024-07-22 10:47:49.120175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.694 [2024-07-22 10:47:49.120185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.694 [2024-07-22 10:47:49.120191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.694 [2024-07-22 10:47:49.120203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.694 [2024-07-22 10:47:49.120211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.694 [2024-07-22 10:47:49.120220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.694 [2024-07-22 10:47:49.120227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.695 [2024-07-22 10:47:49.120236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.695 [2024-07-22 10:47:49.120243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.695 [2024-07-22 10:47:49.120252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.695 [2024-07-22 10:47:49.120259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.695 [2024-07-22 10:47:49.120268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.695 [2024-07-22 10:47:49.120275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.695 [2024-07-22 10:47:49.120284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.695 [2024-07-22 10:47:49.120291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.695 [2024-07-22 10:47:49.120301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.695 [2024-07-22 10:47:49.120307] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.695 [2024-07-22 10:47:49.120317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.695 [2024-07-22 10:47:49.120323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.695 [2024-07-22 10:47:49.120333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.695 [2024-07-22 10:47:49.120340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.695 [2024-07-22 10:47:49.120349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.695 [2024-07-22 10:47:49.120356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.695 [2024-07-22 10:47:49.120365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.695 [2024-07-22 10:47:49.120372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.695 [2024-07-22 10:47:49.120381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.695 [2024-07-22 10:47:49.120388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.695 [2024-07-22 10:47:49.120400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.695 [2024-07-22 10:47:49.120413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.695 [2024-07-22 10:47:49.120423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.695 [2024-07-22 10:47:49.120430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.695 [2024-07-22 10:47:49.120439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.695 [2024-07-22 10:47:49.120446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.695 [2024-07-22 10:47:49.120456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.695 [2024-07-22 10:47:49.120462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.695 [2024-07-22 10:47:49.120472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.695 [2024-07-22 10:47:49.120479] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.695 [2024-07-22 10:47:49.120489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.695 [2024-07-22 10:47:49.120496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.695 [2024-07-22 10:47:49.120505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.695 [2024-07-22 10:47:49.120512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.695 [2024-07-22 10:47:49.120521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.695 [2024-07-22 10:47:49.120528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.695 [2024-07-22 10:47:49.120538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.695 [2024-07-22 10:47:49.120546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.695 [2024-07-22 10:47:49.120555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.695 [2024-07-22 10:47:49.120562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.695 [2024-07-22 10:47:49.120571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.695 [2024-07-22 10:47:49.120578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.695 [2024-07-22 10:47:49.120587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.695 [2024-07-22 10:47:49.120594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.695 [2024-07-22 10:47:49.120603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.695 [2024-07-22 10:47:49.120611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.695 [2024-07-22 10:47:49.120621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.695 [2024-07-22 10:47:49.120628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.695 [2024-07-22 10:47:49.120637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.695 [2024-07-22 10:47:49.120644] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.695 [2024-07-22 10:47:49.120654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.695 [2024-07-22 10:47:49.120661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.695 [2024-07-22 10:47:49.120670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.695 [2024-07-22 10:47:49.120677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.695 [2024-07-22 10:47:49.120687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.695 [2024-07-22 10:47:49.120694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.695 [2024-07-22 10:47:49.120703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.695 [2024-07-22 10:47:49.120710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.695 [2024-07-22 10:47:49.120719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.695 [2024-07-22 10:47:49.120726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.695 [2024-07-22 10:47:49.120735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.695 [2024-07-22 10:47:49.120742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.695 [2024-07-22 10:47:49.120751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.695 [2024-07-22 10:47:49.120758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.695 [2024-07-22 10:47:49.120767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.695 [2024-07-22 10:47:49.120774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.695 [2024-07-22 10:47:49.120783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.695 [2024-07-22 10:47:49.120790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.695 [2024-07-22 10:47:49.120800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.695 [2024-07-22 10:47:49.120807] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.695 [2024-07-22 10:47:49.120816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.695 [2024-07-22 10:47:49.120826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.695 [2024-07-22 10:47:49.120835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.695 [2024-07-22 10:47:49.120842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.695 [2024-07-22 10:47:49.120851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.695 [2024-07-22 10:47:49.120858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.695 [2024-07-22 10:47:49.120868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.695 [2024-07-22 10:47:49.120875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.695 [2024-07-22 10:47:49.120884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.695 [2024-07-22 10:47:49.120891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.695 [2024-07-22 10:47:49.120900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.695 [2024-07-22 10:47:49.120907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.695 [2024-07-22 10:47:49.120916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.695 [2024-07-22 10:47:49.120923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.695 [2024-07-22 10:47:49.120932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.696 [2024-07-22 10:47:49.120939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.696 [2024-07-22 10:47:49.120948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.696 [2024-07-22 10:47:49.120955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.696 [2024-07-22 10:47:49.120964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.696 [2024-07-22 10:47:49.120971] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.696 [2024-07-22 10:47:49.120981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.696 [2024-07-22 10:47:49.120987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.696 [2024-07-22 10:47:49.120997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.696 [2024-07-22 10:47:49.121004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.696 [2024-07-22 10:47:49.121013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.696 [2024-07-22 10:47:49.121020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.696 [2024-07-22 10:47:49.121030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.696 [2024-07-22 10:47:49.121037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.696 [2024-07-22 10:47:49.121046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.696 [2024-07-22 10:47:49.121053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.696 [2024-07-22 10:47:49.121062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.696 [2024-07-22 10:47:49.121070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.696 [2024-07-22 10:47:49.121079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.696 [2024-07-22 10:47:49.121086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.696 [2024-07-22 10:47:49.121095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.696 [2024-07-22 10:47:49.121102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.696 [2024-07-22 10:47:49.121111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.696 [2024-07-22 10:47:49.121118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.696 [2024-07-22 10:47:49.121126] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db24b0 is same with the state(5) to be set 00:31:43.696 [2024-07-22 10:47:49.122392] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:43.696 [2024-07-22 10:47:49.122411] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:31:43.696 [2024-07-22 10:47:49.122421] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:31:43.696 [2024-07-22 10:47:49.122933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.696 [2024-07-22 10:47:49.122949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c6f50 with addr=10.0.0.2, port=4420 00:31:43.696 [2024-07-22 10:47:49.122957] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c6f50 is same with the state(5) to be set 00:31:43.696 [2024-07-22 10:47:49.123304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.696 [2024-07-22 10:47:49.123314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d07480 with addr=10.0.0.2, port=4420 00:31:43.696 [2024-07-22 10:47:49.123322] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d07480 is same with the state(5) to be set 00:31:43.696 [2024-07-22 10:47:49.123798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.696 [2024-07-22 10:47:49.123838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e90690 with addr=10.0.0.2, port=4420 00:31:43.696 [2024-07-22 10:47:49.123849] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e90690 is same with the state(5) to be set 00:31:43.696 [2024-07-22 10:47:49.124661] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:31:43.696 [2024-07-22 10:47:49.124677] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:31:43.696 [2024-07-22 10:47:49.124686] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:31:43.696 [2024-07-22 10:47:49.124722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18c6f50 (9): Bad file descriptor 00:31:43.696 [2024-07-22 10:47:49.124733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d07480 (9): Bad file descriptor 00:31:43.696 [2024-07-22 10:47:49.124742] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e90690 (9): Bad file descriptor 00:31:43.696 [2024-07-22 10:47:49.125145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.696 [2024-07-22 10:47:49.125160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d05ca0 with addr=10.0.0.2, port=4420 00:31:43.696 [2024-07-22 10:47:49.125168] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d05ca0 is same with the state(5) to be set 00:31:43.696 [2024-07-22 10:47:49.125643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.696 [2024-07-22 10:47:49.125680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cffb70 with addr=10.0.0.2, port=4420 00:31:43.696 [2024-07-22 10:47:49.125691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cffb70 is same with the state(5) to be set 00:31:43.696 [2024-07-22 10:47:49.126056] 
posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.696 [2024-07-22 10:47:49.126068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eeb9b0 with addr=10.0.0.2, port=4420 00:31:43.696 [2024-07-22 10:47:49.126075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eeb9b0 is same with the state(5) to be set 00:31:43.696 [2024-07-22 10:47:49.126083] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:43.696 [2024-07-22 10:47:49.126090] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:43.696 [2024-07-22 10:47:49.126098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:43.696 [2024-07-22 10:47:49.126114] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:31:43.696 [2024-07-22 10:47:49.126120] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:31:43.696 [2024-07-22 10:47:49.126126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:31:43.696 [2024-07-22 10:47:49.126136] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:31:43.696 [2024-07-22 10:47:49.126143] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:31:43.696 [2024-07-22 10:47:49.126149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:31:43.696 [2024-07-22 10:47:49.126210] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:43.696 [2024-07-22 10:47:49.126219] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:43.696 [2024-07-22 10:47:49.126225] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:43.696 [2024-07-22 10:47:49.126234] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d05ca0 (9): Bad file descriptor 00:31:43.696 [2024-07-22 10:47:49.126244] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cffb70 (9): Bad file descriptor 00:31:43.696 [2024-07-22 10:47:49.126253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eeb9b0 (9): Bad file descriptor 00:31:43.696 [2024-07-22 10:47:49.126275] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee49b0 (9): Bad file descriptor 00:31:43.696 [2024-07-22 10:47:49.126337] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:31:43.696 [2024-07-22 10:47:49.126345] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:31:43.696 [2024-07-22 10:47:49.126356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
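[editorial note] The repeated "connect() failed, errno = 111" entries above are ECONNREFUSED on Linux: nothing is accepting on 10.0.0.2:4420 while the target-side queues are being deleted, which is why the subsequent reconnect/reinitialization attempts for the listed subsystems fail. A minimal stand-alone sketch (not part of the SPDK test suite; the address and port are simply taken from the log) that surfaces the same errno:

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

/*
 * Hypothetical illustration only: attempt a plain TCP connect to the
 * NVMe/TCP listener address seen in the log. While the target is down
 * or resetting, the expected result on this test bed is errno 111
 * (ECONNREFUSED), matching the posix_sock_create() errors above.
 */
int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),      /* NVMe/TCP port used by the test */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* e.g. "connect: errno=111 (Connection refused)" */
        printf("connect: errno=%d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}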
00:31:43.696 [2024-07-22 10:47:49.126366] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:31:43.696 [2024-07-22 10:47:49.126373] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:31:43.696 [2024-07-22 10:47:49.126380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:31:43.696 [2024-07-22 10:47:49.126389] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:31:43.696 [2024-07-22 10:47:49.126403] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:31:43.696 [2024-07-22 10:47:49.126410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:31:43.696 [2024-07-22 10:47:49.126453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.696 [2024-07-22 10:47:49.126463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.696 [2024-07-22 10:47:49.126478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.696 [2024-07-22 10:47:49.126486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.696 [2024-07-22 10:47:49.126496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.696 [2024-07-22 10:47:49.126503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.696 [2024-07-22 10:47:49.126512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.696 [2024-07-22 10:47:49.126519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.696 [2024-07-22 10:47:49.126529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.696 [2024-07-22 10:47:49.126536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.696 [2024-07-22 10:47:49.126545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.696 [2024-07-22 10:47:49.126552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.696 [2024-07-22 10:47:49.126561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.696 [2024-07-22 10:47:49.126568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.696 [2024-07-22 10:47:49.126577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:43.696 [2024-07-22 10:47:49.126585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.696 [2024-07-22 10:47:49.126594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.697 [2024-07-22 10:47:49.126601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.697 [2024-07-22 10:47:49.126610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.697 [2024-07-22 10:47:49.126620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.697 [2024-07-22 10:47:49.126629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.697 [2024-07-22 10:47:49.126636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.697 [2024-07-22 10:47:49.126645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.697 [2024-07-22 10:47:49.126653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.697 [2024-07-22 10:47:49.126662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.697 [2024-07-22 10:47:49.126669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.697 [2024-07-22 10:47:49.126678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.697 [2024-07-22 10:47:49.126685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.697 [2024-07-22 10:47:49.126694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.697 [2024-07-22 10:47:49.126701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.697 [2024-07-22 10:47:49.126710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.697 [2024-07-22 10:47:49.126717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.697 [2024-07-22 10:47:49.126727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.697 [2024-07-22 10:47:49.126734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.697 [2024-07-22 10:47:49.126743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.697 [2024-07-22 
10:47:49.126750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.697 [2024-07-22 10:47:49.126759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.697 [2024-07-22 10:47:49.126766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.697 [2024-07-22 10:47:49.126775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.697 [2024-07-22 10:47:49.126782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.697 [2024-07-22 10:47:49.126791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.697 [2024-07-22 10:47:49.126799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.697 [2024-07-22 10:47:49.126808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.697 [2024-07-22 10:47:49.126815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.697 [2024-07-22 10:47:49.126825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.697 [2024-07-22 10:47:49.126832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.697 [2024-07-22 10:47:49.126841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.697 [2024-07-22 10:47:49.126849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.697 [2024-07-22 10:47:49.126858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.697 [2024-07-22 10:47:49.126865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.697 [2024-07-22 10:47:49.126874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.697 [2024-07-22 10:47:49.126881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.697 [2024-07-22 10:47:49.126890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.697 [2024-07-22 10:47:49.126897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.697 [2024-07-22 10:47:49.126906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.697 [2024-07-22 10:47:49.126913] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.697 [2024-07-22 10:47:49.126922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.697 [2024-07-22 10:47:49.126929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.697 [2024-07-22 10:47:49.126938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.697 [2024-07-22 10:47:49.126946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.697 [2024-07-22 10:47:49.126955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.697 [2024-07-22 10:47:49.126961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.697 [2024-07-22 10:47:49.126971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.697 [2024-07-22 10:47:49.126978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.697 [2024-07-22 10:47:49.126987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.697 [2024-07-22 10:47:49.126994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.697 [2024-07-22 10:47:49.127003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.697 [2024-07-22 10:47:49.127011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.697 [2024-07-22 10:47:49.127021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.697 [2024-07-22 10:47:49.127029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.697 [2024-07-22 10:47:49.127039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.697 [2024-07-22 10:47:49.127046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.697 [2024-07-22 10:47:49.127055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.697 [2024-07-22 10:47:49.127062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.697 [2024-07-22 10:47:49.127071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.697 [2024-07-22 10:47:49.127078] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.697 [2024-07-22 10:47:49.127089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.697 [2024-07-22 10:47:49.127095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.697 [2024-07-22 10:47:49.127105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.697 [2024-07-22 10:47:49.127112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.697 [2024-07-22 10:47:49.127122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.697 [2024-07-22 10:47:49.127129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.697 [2024-07-22 10:47:49.127138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.697 [2024-07-22 10:47:49.127145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.697 [2024-07-22 10:47:49.127154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.697 [2024-07-22 10:47:49.127161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.697 [2024-07-22 10:47:49.127170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.697 [2024-07-22 10:47:49.127177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.697 [2024-07-22 10:47:49.127187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.697 [2024-07-22 10:47:49.127193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.697 [2024-07-22 10:47:49.127203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.697 [2024-07-22 10:47:49.127210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.697 [2024-07-22 10:47:49.127219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.697 [2024-07-22 10:47:49.127226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.697 [2024-07-22 10:47:49.127237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.697 [2024-07-22 10:47:49.127244] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.697 [2024-07-22 10:47:49.127254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.697 [2024-07-22 10:47:49.127261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.697 [2024-07-22 10:47:49.127271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.697 [2024-07-22 10:47:49.127278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.697 [2024-07-22 10:47:49.127287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.697 [2024-07-22 10:47:49.127294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.697 [2024-07-22 10:47:49.127303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.698 [2024-07-22 10:47:49.127310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.698 [2024-07-22 10:47:49.127320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.698 [2024-07-22 10:47:49.127327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.698 [2024-07-22 10:47:49.127336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.698 [2024-07-22 10:47:49.127344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.698 [2024-07-22 10:47:49.127353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.698 [2024-07-22 10:47:49.127360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.698 [2024-07-22 10:47:49.127369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.698 [2024-07-22 10:47:49.127376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.698 [2024-07-22 10:47:49.127385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.698 [2024-07-22 10:47:49.127392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.698 [2024-07-22 10:47:49.127407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.698 [2024-07-22 10:47:49.127415] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.698 [2024-07-22 10:47:49.127424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.698 [2024-07-22 10:47:49.127431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.698 [2024-07-22 10:47:49.127440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.698 [2024-07-22 10:47:49.127448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.698 [2024-07-22 10:47:49.127458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.698 [2024-07-22 10:47:49.127465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.698 [2024-07-22 10:47:49.127474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.698 [2024-07-22 10:47:49.127481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.698 [2024-07-22 10:47:49.127490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.698 [2024-07-22 10:47:49.127497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.698 [2024-07-22 10:47:49.127506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.698 [2024-07-22 10:47:49.127513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.698 [2024-07-22 10:47:49.127521] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db3970 is same with the state(5) to be set 00:31:43.698 [2024-07-22 10:47:49.128808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.698 [2024-07-22 10:47:49.128821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.698 [2024-07-22 10:47:49.128834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.698 [2024-07-22 10:47:49.128841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.698 [2024-07-22 10:47:49.128850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.698 [2024-07-22 10:47:49.128858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.698 [2024-07-22 10:47:49.128867] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.698 [2024-07-22 10:47:49.128874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.698 [2024-07-22 10:47:49.128883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.698 [2024-07-22 10:47:49.128890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.698 [2024-07-22 10:47:49.128900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.698 [2024-07-22 10:47:49.128907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.698 [2024-07-22 10:47:49.128916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.698 [2024-07-22 10:47:49.128923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.698 [2024-07-22 10:47:49.128932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.698 [2024-07-22 10:47:49.128939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.698 [2024-07-22 10:47:49.128951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.698 [2024-07-22 10:47:49.128959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.698 [2024-07-22 10:47:49.128968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.698 [2024-07-22 10:47:49.128975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.698 [2024-07-22 10:47:49.128985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.698 [2024-07-22 10:47:49.128992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.698 [2024-07-22 10:47:49.129001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.699 [2024-07-22 10:47:49.129008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.699 [2024-07-22 10:47:49.129017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.699 [2024-07-22 10:47:49.129024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.699 [2024-07-22 10:47:49.129034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.699 [2024-07-22 10:47:49.129041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.699 [2024-07-22 10:47:49.129050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.699 [2024-07-22 10:47:49.129057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.699 [2024-07-22 10:47:49.129066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.699 [2024-07-22 10:47:49.129073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.699 [2024-07-22 10:47:49.129082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.699 [2024-07-22 10:47:49.129089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.699 [2024-07-22 10:47:49.129099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.699 [2024-07-22 10:47:49.129105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.699 [2024-07-22 10:47:49.129115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.699 [2024-07-22 10:47:49.129122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.699 [2024-07-22 10:47:49.129131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.699 [2024-07-22 10:47:49.129138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.699 [2024-07-22 10:47:49.129147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.699 [2024-07-22 10:47:49.129156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.699 [2024-07-22 10:47:49.129165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.699 [2024-07-22 10:47:49.129172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.699 [2024-07-22 10:47:49.129181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.699 [2024-07-22 10:47:49.129188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.699 [2024-07-22 10:47:49.129197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.699 [2024-07-22 10:47:49.129204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.699 [2024-07-22 10:47:49.129214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.699 [2024-07-22 10:47:49.129221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.699 [2024-07-22 10:47:49.129230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.699 [2024-07-22 10:47:49.129237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.699 [2024-07-22 10:47:49.129247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.699 [2024-07-22 10:47:49.129254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.699 [2024-07-22 10:47:49.129263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.699 [2024-07-22 10:47:49.129270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.699 [2024-07-22 10:47:49.129279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.699 [2024-07-22 10:47:49.129286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.699 [2024-07-22 10:47:49.129295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.699 [2024-07-22 10:47:49.129302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.699 [2024-07-22 10:47:49.129311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.699 [2024-07-22 10:47:49.129318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.699 [2024-07-22 10:47:49.129327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.699 [2024-07-22 10:47:49.129334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.699 [2024-07-22 10:47:49.129343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.699 [2024-07-22 10:47:49.129350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.699 [2024-07-22 10:47:49.129361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:43.699 [2024-07-22 10:47:49.129368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.699 [2024-07-22 10:47:49.129378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.699 [2024-07-22 10:47:49.129385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.700 [2024-07-22 10:47:49.129398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.700 [2024-07-22 10:47:49.129405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.700 [2024-07-22 10:47:49.129414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.700 [2024-07-22 10:47:49.129421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.700 [2024-07-22 10:47:49.129431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.700 [2024-07-22 10:47:49.129437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.700 [2024-07-22 10:47:49.129447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.700 [2024-07-22 10:47:49.129454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.700 [2024-07-22 10:47:49.129463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.700 [2024-07-22 10:47:49.129470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.700 [2024-07-22 10:47:49.129480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.700 [2024-07-22 10:47:49.129487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.700 [2024-07-22 10:47:49.129496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.700 [2024-07-22 10:47:49.129503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.700 [2024-07-22 10:47:49.129512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.700 [2024-07-22 10:47:49.129519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.700 [2024-07-22 10:47:49.129528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:43.700 [2024-07-22 10:47:49.129535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.700 [2024-07-22 10:47:49.129545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.700 [2024-07-22 10:47:49.129552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.700 [2024-07-22 10:47:49.129561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.700 [2024-07-22 10:47:49.129570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.700 [2024-07-22 10:47:49.129579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.700 [2024-07-22 10:47:49.129586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.700 [2024-07-22 10:47:49.129595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.700 [2024-07-22 10:47:49.129602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.700 [2024-07-22 10:47:49.129612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.700 [2024-07-22 10:47:49.129619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.700 [2024-07-22 10:47:49.129628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.700 [2024-07-22 10:47:49.129635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.700 [2024-07-22 10:47:49.129644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.700 [2024-07-22 10:47:49.129651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.700 [2024-07-22 10:47:49.129660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.700 [2024-07-22 10:47:49.129667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.700 [2024-07-22 10:47:49.129677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.700 [2024-07-22 10:47:49.129684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.700 [2024-07-22 10:47:49.129693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.700 [2024-07-22 
10:47:49.129700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.700 [2024-07-22 10:47:49.129709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.700 [2024-07-22 10:47:49.129716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.700 [2024-07-22 10:47:49.129726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.700 [2024-07-22 10:47:49.129733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.700 [2024-07-22 10:47:49.129743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.700 [2024-07-22 10:47:49.129749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.700 [2024-07-22 10:47:49.129759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.700 [2024-07-22 10:47:49.129767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.700 [2024-07-22 10:47:49.129780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.700 [2024-07-22 10:47:49.129787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.700 [2024-07-22 10:47:49.129796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.700 [2024-07-22 10:47:49.129803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.700 [2024-07-22 10:47:49.129812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.700 [2024-07-22 10:47:49.129819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.700 [2024-07-22 10:47:49.129829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.700 [2024-07-22 10:47:49.129835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.700 [2024-07-22 10:47:49.129844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.700 [2024-07-22 10:47:49.129851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.700 [2024-07-22 10:47:49.129861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.700 [2024-07-22 10:47:49.129868] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.700 [2024-07-22 10:47:49.129876] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db4e30 is same with the state(5) to be set 00:31:43.700 [2024-07-22 10:47:49.131136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.700 [2024-07-22 10:47:49.131150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.700 [2024-07-22 10:47:49.131162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.700 [2024-07-22 10:47:49.131169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.700 [2024-07-22 10:47:49.131179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.700 [2024-07-22 10:47:49.131186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.700 [2024-07-22 10:47:49.131195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.700 [2024-07-22 10:47:49.131203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.700 [2024-07-22 10:47:49.131212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.700 [2024-07-22 10:47:49.131219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.700 [2024-07-22 10:47:49.131229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.700 [2024-07-22 10:47:49.131236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.700 [2024-07-22 10:47:49.131248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.700 [2024-07-22 10:47:49.131255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.700 [2024-07-22 10:47:49.131265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.700 [2024-07-22 10:47:49.131272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.700 [2024-07-22 10:47:49.131281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.700 [2024-07-22 10:47:49.131288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.700 [2024-07-22 10:47:49.131297] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.700 [2024-07-22 10:47:49.131305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.700 [2024-07-22 10:47:49.131314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.700 [2024-07-22 10:47:49.131321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.700 [2024-07-22 10:47:49.131330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.700 [2024-07-22 10:47:49.131337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.700 [2024-07-22 10:47:49.131346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.700 [2024-07-22 10:47:49.131353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.700 [2024-07-22 10:47:49.131363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.701 [2024-07-22 10:47:49.131370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.701 [2024-07-22 10:47:49.131379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.701 [2024-07-22 10:47:49.131386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.701 [2024-07-22 10:47:49.131399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.701 [2024-07-22 10:47:49.131407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.701 [2024-07-22 10:47:49.131416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.701 [2024-07-22 10:47:49.131423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.701 [2024-07-22 10:47:49.131433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.701 [2024-07-22 10:47:49.131440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.701 [2024-07-22 10:47:49.131449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.701 [2024-07-22 10:47:49.131458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.701 [2024-07-22 10:47:49.131467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.701 [2024-07-22 10:47:49.131475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.701 [2024-07-22 10:47:49.131484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.701 [2024-07-22 10:47:49.131491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.701 [2024-07-22 10:47:49.131500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.701 [2024-07-22 10:47:49.131507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.701 [2024-07-22 10:47:49.131517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.701 [2024-07-22 10:47:49.131524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.701 [2024-07-22 10:47:49.131533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.701 [2024-07-22 10:47:49.131540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.701 [2024-07-22 10:47:49.131549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.701 [2024-07-22 10:47:49.131556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.701 [2024-07-22 10:47:49.131565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.701 [2024-07-22 10:47:49.131573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.701 [2024-07-22 10:47:49.131582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.701 [2024-07-22 10:47:49.131589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.701 [2024-07-22 10:47:49.131598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.701 [2024-07-22 10:47:49.131605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.701 [2024-07-22 10:47:49.131614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.701 [2024-07-22 10:47:49.131621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.701 [2024-07-22 10:47:49.131630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.701 [2024-07-22 10:47:49.131638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.701 [2024-07-22 10:47:49.131647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.701 [2024-07-22 10:47:49.131654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.701 [2024-07-22 10:47:49.131665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.701 [2024-07-22 10:47:49.131673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.701 [2024-07-22 10:47:49.131681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.701 [2024-07-22 10:47:49.131689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.701 [2024-07-22 10:47:49.131698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.701 [2024-07-22 10:47:49.131705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.701 [2024-07-22 10:47:49.131714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.701 [2024-07-22 10:47:49.131722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.701 [2024-07-22 10:47:49.131731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.701 [2024-07-22 10:47:49.131738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.701 [2024-07-22 10:47:49.131747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.701 [2024-07-22 10:47:49.131754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.701 [2024-07-22 10:47:49.131763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.701 [2024-07-22 10:47:49.131770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.701 [2024-07-22 10:47:49.131779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.701 [2024-07-22 10:47:49.131786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.701 [2024-07-22 10:47:49.131796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:31:43.701 [2024-07-22 10:47:49.131803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.701 [2024-07-22 10:47:49.131812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.701 [2024-07-22 10:47:49.131819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.701 [2024-07-22 10:47:49.131828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.701 [2024-07-22 10:47:49.131835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.701 [2024-07-22 10:47:49.131844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.701 [2024-07-22 10:47:49.131852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.701 [2024-07-22 10:47:49.131861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.701 [2024-07-22 10:47:49.131870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.701 [2024-07-22 10:47:49.131879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.701 [2024-07-22 10:47:49.131886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.701 [2024-07-22 10:47:49.131896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.701 [2024-07-22 10:47:49.131903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.701 [2024-07-22 10:47:49.131912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.701 [2024-07-22 10:47:49.131919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.701 [2024-07-22 10:47:49.131929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.701 [2024-07-22 10:47:49.131936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.701 [2024-07-22 10:47:49.131945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.701 [2024-07-22 10:47:49.131952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.701 [2024-07-22 10:47:49.131961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:43.701 [2024-07-22 10:47:49.131968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.701 [2024-07-22 10:47:49.131978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.701 [2024-07-22 10:47:49.131985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.701 [2024-07-22 10:47:49.131994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.701 [2024-07-22 10:47:49.132001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.701 [2024-07-22 10:47:49.132011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.701 [2024-07-22 10:47:49.132018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.701 [2024-07-22 10:47:49.132027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.701 [2024-07-22 10:47:49.132034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.701 [2024-07-22 10:47:49.132043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.701 [2024-07-22 10:47:49.132050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.701 [2024-07-22 10:47:49.132060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.701 [2024-07-22 10:47:49.132067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.701 [2024-07-22 10:47:49.132077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.702 [2024-07-22 10:47:49.132084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.702 [2024-07-22 10:47:49.132094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.702 [2024-07-22 10:47:49.132101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.702 [2024-07-22 10:47:49.132111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.702 [2024-07-22 10:47:49.132118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.702 [2024-07-22 10:47:49.132128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.702 [2024-07-22 
10:47:49.132135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.702 [2024-07-22 10:47:49.132144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.702 [2024-07-22 10:47:49.132151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.702 [2024-07-22 10:47:49.132160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.702 [2024-07-22 10:47:49.132167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.702 [2024-07-22 10:47:49.132177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.702 [2024-07-22 10:47:49.132184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.702 [2024-07-22 10:47:49.132194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.702 [2024-07-22 10:47:49.132201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.702 [2024-07-22 10:47:49.132209] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7cb00 is same with the state(5) to be set 00:31:43.702 [2024-07-22 10:47:49.133456] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:43.702 [2024-07-22 10:47:49.133467] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:43.702 [2024-07-22 10:47:49.133475] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:43.702 [2024-07-22 10:47:49.133483] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:31:43.702 [2024-07-22 10:47:49.133493] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:31:43.702 [2024-07-22 10:47:49.133594] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:31:43.702 [2024-07-22 10:47:49.133944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.702 [2024-07-22 10:47:49.133957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea4910 with addr=10.0.0.2, port=4420 00:31:43.702 [2024-07-22 10:47:49.133966] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea4910 is same with the state(5) to be set 00:31:43.702 [2024-07-22 10:47:49.134177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.702 [2024-07-22 10:47:49.134186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f7610 with addr=10.0.0.2, port=4420 00:31:43.702 [2024-07-22 10:47:49.134197] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f7610 is same with the state(5) to be set 00:31:43.702 [2024-07-22 10:47:49.134980] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:31:43.702 [2024-07-22 10:47:49.134993] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:31:43.702 [2024-07-22 10:47:49.135002] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:43.702 [2024-07-22 10:47:49.135233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.702 [2024-07-22 10:47:49.135244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9e420 with addr=10.0.0.2, port=4420 00:31:43.702 [2024-07-22 10:47:49.135251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e9e420 is same with the state(5) to be set 00:31:43.702 [2024-07-22 10:47:49.135261] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ea4910 (9): Bad file descriptor 00:31:43.702 [2024-07-22 10:47:49.135270] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f7610 (9): Bad file descriptor 00:31:43.702 [2024-07-22 10:47:49.135698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.702 [2024-07-22 10:47:49.135712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e90690 with addr=10.0.0.2, port=4420 00:31:43.702 [2024-07-22 10:47:49.135719] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e90690 is same with the state(5) to be set 00:31:43.702 [2024-07-22 10:47:49.135929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.702 [2024-07-22 10:47:49.135938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d07480 with addr=10.0.0.2, port=4420 00:31:43.702 [2024-07-22 10:47:49.135945] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d07480 is same with the state(5) to be set 00:31:43.702 [2024-07-22 10:47:49.136251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:31:43.702 [2024-07-22 10:47:49.136260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c6f50 with addr=10.0.0.2, port=4420 00:31:43.702 [2024-07-22 10:47:49.136267] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c6f50 is same with the state(5) to be set 00:31:43.702 [2024-07-22 10:47:49.136275] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e9e420 (9): Bad file descriptor 00:31:43.702 [2024-07-22 10:47:49.136284] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:31:43.702 [2024-07-22 10:47:49.136290] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:31:43.702 [2024-07-22 10:47:49.136299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:31:43.702 [2024-07-22 10:47:49.136309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:31:43.702 [2024-07-22 10:47:49.136315] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:31:43.702 [2024-07-22 10:47:49.136321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:31:43.702 [2024-07-22 10:47:49.136363] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:31:43.702 [2024-07-22 10:47:49.136372] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:31:43.702 [2024-07-22 10:47:49.136380] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:31:43.702 [2024-07-22 10:47:49.136388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:43.702 [2024-07-22 10:47:49.136398] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:43.702 [2024-07-22 10:47:49.136424] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e90690 (9): Bad file descriptor 00:31:43.702 [2024-07-22 10:47:49.136434] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d07480 (9): Bad file descriptor 00:31:43.702 [2024-07-22 10:47:49.136443] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18c6f50 (9): Bad file descriptor 00:31:43.702 [2024-07-22 10:47:49.136451] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:31:43.702 [2024-07-22 10:47:49.136457] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:31:43.702 [2024-07-22 10:47:49.136463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:31:43.702 [2024-07-22 10:47:49.136496] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:43.702 [2024-07-22 10:47:49.136847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.702 [2024-07-22 10:47:49.136858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eeb9b0 with addr=10.0.0.2, port=4420 00:31:43.702 [2024-07-22 10:47:49.136866] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eeb9b0 is same with the state(5) to be set 00:31:43.702 [2024-07-22 10:47:49.137207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.702 [2024-07-22 10:47:49.137218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cffb70 with addr=10.0.0.2, port=4420 00:31:43.702 [2024-07-22 10:47:49.137226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cffb70 is same with the state(5) to be set 00:31:43.702 [2024-07-22 10:47:49.137414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.702 [2024-07-22 10:47:49.137425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d05ca0 with addr=10.0.0.2, port=4420 00:31:43.702 [2024-07-22 10:47:49.137432] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d05ca0 is same with the state(5) to be set 00:31:43.702 [2024-07-22 10:47:49.137439] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:31:43.702 [2024-07-22 10:47:49.137445] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:31:43.702 [2024-07-22 10:47:49.137452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:31:43.702 [2024-07-22 10:47:49.137462] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:31:43.702 [2024-07-22 10:47:49.137468] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:31:43.702 [2024-07-22 10:47:49.137474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:31:43.702 [2024-07-22 10:47:49.137483] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:43.702 [2024-07-22 10:47:49.137489] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:43.702 [2024-07-22 10:47:49.137496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:31:43.702 [2024-07-22 10:47:49.137535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.702 [2024-07-22 10:47:49.137544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.702 [2024-07-22 10:47:49.137555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.702 [2024-07-22 10:47:49.137562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.702 [2024-07-22 10:47:49.137574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.702 [2024-07-22 10:47:49.137581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.702 [2024-07-22 10:47:49.137591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.702 [2024-07-22 10:47:49.137598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.702 [2024-07-22 10:47:49.137607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.702 [2024-07-22 10:47:49.137614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.702 [2024-07-22 10:47:49.137623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.702 [2024-07-22 10:47:49.137630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.703 [2024-07-22 10:47:49.137639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.703 [2024-07-22 10:47:49.137646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.703 [2024-07-22 10:47:49.137655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.703 [2024-07-22 10:47:49.137662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.703 [2024-07-22 10:47:49.137671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.703 [2024-07-22 10:47:49.137678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.703 [2024-07-22 10:47:49.137688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.703 [2024-07-22 10:47:49.137695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.703 [2024-07-22 
10:47:49.137704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.703 [2024-07-22 10:47:49.137712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.703 [2024-07-22 10:47:49.137721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.703 [2024-07-22 10:47:49.137728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.703 [2024-07-22 10:47:49.137737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.703 [2024-07-22 10:47:49.137744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.703 [2024-07-22 10:47:49.137753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.703 [2024-07-22 10:47:49.137760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.703 [2024-07-22 10:47:49.137770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.703 [2024-07-22 10:47:49.137778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.703 [2024-07-22 10:47:49.137787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.703 [2024-07-22 10:47:49.137794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.703 [2024-07-22 10:47:49.137804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.703 [2024-07-22 10:47:49.137811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.703 [2024-07-22 10:47:49.137820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.703 [2024-07-22 10:47:49.137827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.703 [2024-07-22 10:47:49.137836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.703 [2024-07-22 10:47:49.137843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.703 [2024-07-22 10:47:49.137853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.703 [2024-07-22 10:47:49.137860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.703 [2024-07-22 10:47:49.137868] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.703 [2024-07-22 10:47:49.137876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.703 [2024-07-22 10:47:49.137885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.703 [2024-07-22 10:47:49.137892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.703 [2024-07-22 10:47:49.137901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.703 [2024-07-22 10:47:49.137908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.703 [2024-07-22 10:47:49.137918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.703 [2024-07-22 10:47:49.137925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.703 [2024-07-22 10:47:49.137934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.703 [2024-07-22 10:47:49.137941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.703 [2024-07-22 10:47:49.137950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.703 [2024-07-22 10:47:49.137957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.703 [2024-07-22 10:47:49.137967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.703 [2024-07-22 10:47:49.137974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.703 [2024-07-22 10:47:49.137985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.703 [2024-07-22 10:47:49.137992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.703 [2024-07-22 10:47:49.138001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.703 [2024-07-22 10:47:49.138008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.703 [2024-07-22 10:47:49.138017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.703 [2024-07-22 10:47:49.138024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.703 [2024-07-22 10:47:49.138034] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.703 [2024-07-22 10:47:49.138041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.703 [2024-07-22 10:47:49.138050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.703 [2024-07-22 10:47:49.138057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.703 [2024-07-22 10:47:49.138066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.703 [2024-07-22 10:47:49.138073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.703 [2024-07-22 10:47:49.138082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.703 [2024-07-22 10:47:49.138090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.703 [2024-07-22 10:47:49.138099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.703 [2024-07-22 10:47:49.138107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.703 [2024-07-22 10:47:49.138116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.703 [2024-07-22 10:47:49.138123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.703 [2024-07-22 10:47:49.138133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.703 [2024-07-22 10:47:49.138139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.703 [2024-07-22 10:47:49.138149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.703 [2024-07-22 10:47:49.138156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.703 [2024-07-22 10:47:49.138165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.703 [2024-07-22 10:47:49.138172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.703 [2024-07-22 10:47:49.138181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.703 [2024-07-22 10:47:49.138190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.703 [2024-07-22 10:47:49.138199] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.703 [2024-07-22 10:47:49.138206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.703 [2024-07-22 10:47:49.138216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.703 [2024-07-22 10:47:49.138223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.703 [2024-07-22 10:47:49.138232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.703 [2024-07-22 10:47:49.138239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.703 [2024-07-22 10:47:49.138249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.703 [2024-07-22 10:47:49.138256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.704 [2024-07-22 10:47:49.138265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.704 [2024-07-22 10:47:49.138273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.704 [2024-07-22 10:47:49.138282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.704 [2024-07-22 10:47:49.138289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.704 [2024-07-22 10:47:49.138298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.704 [2024-07-22 10:47:49.138305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.704 [2024-07-22 10:47:49.138314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.704 [2024-07-22 10:47:49.138321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.704 [2024-07-22 10:47:49.138330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.704 [2024-07-22 10:47:49.138337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.704 [2024-07-22 10:47:49.138346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.704 [2024-07-22 10:47:49.138353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.704 [2024-07-22 10:47:49.138363] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.704 [2024-07-22 10:47:49.138370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.704 [2024-07-22 10:47:49.138379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.704 [2024-07-22 10:47:49.138386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.704 [2024-07-22 10:47:49.138403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.704 [2024-07-22 10:47:49.138410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.704 [2024-07-22 10:47:49.138420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.704 [2024-07-22 10:47:49.138427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.704 [2024-07-22 10:47:49.138436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.704 [2024-07-22 10:47:49.138443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.704 [2024-07-22 10:47:49.138452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.704 [2024-07-22 10:47:49.138459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.704 [2024-07-22 10:47:49.138468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.704 [2024-07-22 10:47:49.138475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.704 [2024-07-22 10:47:49.138485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.704 [2024-07-22 10:47:49.138491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.704 [2024-07-22 10:47:49.138501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.704 [2024-07-22 10:47:49.138508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.704 [2024-07-22 10:47:49.138518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.704 [2024-07-22 10:47:49.138525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.704 [2024-07-22 10:47:49.138534] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.704 [2024-07-22 10:47:49.138541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.704 [2024-07-22 10:47:49.138550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.704 [2024-07-22 10:47:49.138557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.704 [2024-07-22 10:47:49.138566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.704 [2024-07-22 10:47:49.138573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.704 [2024-07-22 10:47:49.138583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.704 [2024-07-22 10:47:49.138590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.704 [2024-07-22 10:47:49.138597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cd8ea0 is same with the state(5) to be set 00:31:43.704 [2024-07-22 10:47:49.141235] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:43.704 [2024-07-22 10:47:49.141256] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:43.704 [2024-07-22 10:47:49.141262] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:43.704 task offset: 26752 on job bdev=Nvme10n1 fails
00:31:43.704
00:31:43.704 Latency(us)
00:31:43.704 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:43.704 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:43.704 Job: Nvme1n1 ended in about 0.95 seconds with error
00:31:43.704 Verification LBA range: start 0x0 length 0x400
00:31:43.704 Nvme1n1 : 0.95 140.65 8.79 67.68 0.00 303840.86 32331.09 237677.23
00:31:43.704 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:43.704 Job: Nvme2n1 ended in about 0.93 seconds with error
00:31:43.704 Verification LBA range: start 0x0 length 0x400
00:31:43.704 Nvme2n1 : 0.93 205.42 12.84 68.47 0.00 226308.91 21736.11 246415.36
00:31:43.704 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:43.704 Job: Nvme3n1 ended in about 0.94 seconds with error
00:31:43.704 Verification LBA range: start 0x0 length 0x400
00:31:43.704 Nvme3n1 : 0.94 205.17 12.82 68.39 0.00 221860.91 15182.51 249910.61
00:31:43.704 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:43.704 Job: Nvme4n1 ended in about 0.95 seconds with error
00:31:43.704 Verification LBA range: start 0x0 length 0x400
00:31:43.704 Nvme4n1 : 0.95 202.55 12.66 67.52 0.00 220115.41 15291.73 249910.61
00:31:43.704 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:43.704 Job: Nvme5n1 ended in about 0.95 seconds with error
00:31:43.704 Verification LBA range: start 0x0 length 0x400
00:31:43.704 Nvme5n1 : 0.95 134.70 8.42 67.35 0.00 288055.18 36481.71 244667.73
00:31:43.704 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:43.704 Job: Nvme6n1 ended in about 0.96 seconds with error
00:31:43.704 Verification LBA range: start 0x0 length 0x400
00:31:43.704 Nvme6n1 : 0.96 200.70 12.54 66.90 0.00 212793.07 11086.51 248162.99
00:31:43.704 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:43.704 Job: Nvme7n1 ended in about 0.96 seconds with error
00:31:43.704 Verification LBA range: start 0x0 length 0x400
00:31:43.704 Nvme7n1 : 0.96 133.48 8.34 66.74 0.00 278356.76 15400.96 246415.36
00:31:43.704 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:43.704 Job: Nvme8n1 ended in about 0.96 seconds with error
00:31:43.704 Verification LBA range: start 0x0 length 0x400
00:31:43.704 Nvme8n1 : 0.96 199.73 12.48 66.58 0.00 204657.49 9994.24 237677.23
00:31:43.704 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:43.704 Job: Nvme9n1 ended in about 0.97 seconds with error
00:31:43.704 Verification LBA range: start 0x0 length 0x400
00:31:43.704 Nvme9n1 : 0.97 132.27 8.27 66.14 0.00 268954.45 19770.03 267386.88
00:31:43.704 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:43.704 Job: Nvme10n1 ended in about 0.93 seconds with error
00:31:43.704 Verification LBA range: start 0x0 length 0x400
00:31:43.704 Nvme10n1 : 0.93 205.78 12.86 68.59 0.00 188307.41 19988.48 223696.21
00:31:43.704 ===================================================================================================================
00:31:43.704 Total : 1760.46 110.03 674.36 0.00 236640.13 9994.24 267386.88
00:31:43.704 [2024-07-22 10:47:49.169362] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:31:43.704 [2024-07-22 10:47:49.169399] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting
controller 00:31:43.704 [2024-07-22 10:47:49.169431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eeb9b0 (9): Bad file descriptor 00:31:43.704 [2024-07-22 10:47:49.169450] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cffb70 (9): Bad file descriptor 00:31:43.704 [2024-07-22 10:47:49.169460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d05ca0 (9): Bad file descriptor 00:31:43.704 [2024-07-22 10:47:49.169789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.704 [2024-07-22 10:47:49.169806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee49b0 with addr=10.0.0.2, port=4420 00:31:43.704 [2024-07-22 10:47:49.169815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee49b0 is same with the state(5) to be set 00:31:43.704 [2024-07-22 10:47:49.169823] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:31:43.704 [2024-07-22 10:47:49.169829] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:31:43.704 [2024-07-22 10:47:49.169837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:31:43.704 [2024-07-22 10:47:49.169849] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:31:43.704 [2024-07-22 10:47:49.169856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:31:43.704 [2024-07-22 10:47:49.169862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:31:43.704 [2024-07-22 10:47:49.169873] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:31:43.704 [2024-07-22 10:47:49.169879] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:31:43.704 [2024-07-22 10:47:49.169886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:31:43.704 [2024-07-22 10:47:49.169945] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:43.705 [2024-07-22 10:47:49.169958] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:43.705 [2024-07-22 10:47:49.169968] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:43.705 [2024-07-22 10:47:49.170256] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:43.705 [2024-07-22 10:47:49.170266] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:43.705 [2024-07-22 10:47:49.170272] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:43.705 [2024-07-22 10:47:49.170292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee49b0 (9): Bad file descriptor 00:31:43.705 [2024-07-22 10:47:49.170571] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:31:43.705 [2024-07-22 10:47:49.170585] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:31:43.705 [2024-07-22 10:47:49.170594] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:31:43.705 [2024-07-22 10:47:49.170603] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:43.705 [2024-07-22 10:47:49.170633] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:31:43.705 [2024-07-22 10:47:49.170639] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:31:43.705 [2024-07-22 10:47:49.170646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:31:43.705 [2024-07-22 10:47:49.170678] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:31:43.705 [2024-07-22 10:47:49.170688] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:31:43.705 [2024-07-22 10:47:49.170707] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:43.705 [2024-07-22 10:47:49.171047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.705 [2024-07-22 10:47:49.171059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f7610 with addr=10.0.0.2, port=4420 00:31:43.705 [2024-07-22 10:47:49.171067] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f7610 is same with the state(5) to be set 00:31:43.705 [2024-07-22 10:47:49.171248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.705 [2024-07-22 10:47:49.171260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea4910 with addr=10.0.0.2, port=4420 00:31:43.705 [2024-07-22 10:47:49.171267] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea4910 is same with the state(5) to be set 00:31:43.705 [2024-07-22 10:47:49.171689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.705 [2024-07-22 10:47:49.171699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9e420 with addr=10.0.0.2, port=4420 00:31:43.705 [2024-07-22 10:47:49.171706] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e9e420 is same with the state(5) to be set 00:31:43.705 [2024-07-22 10:47:49.172023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.705 [2024-07-22 10:47:49.172033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c6f50 with addr=10.0.0.2, port=4420 00:31:43.705 [2024-07-22 10:47:49.172040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c6f50 is same with the state(5) to be set 00:31:43.705 [2024-07-22 10:47:49.172413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.705 [2024-07-22 10:47:49.172425] 
nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d07480 with addr=10.0.0.2, port=4420 00:31:43.705 [2024-07-22 10:47:49.172432] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d07480 is same with the state(5) to be set 00:31:43.705 [2024-07-22 10:47:49.172494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.705 [2024-07-22 10:47:49.172502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e90690 with addr=10.0.0.2, port=4420 00:31:43.705 [2024-07-22 10:47:49.172509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e90690 is same with the state(5) to be set 00:31:43.705 [2024-07-22 10:47:49.172519] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f7610 (9): Bad file descriptor 00:31:43.705 [2024-07-22 10:47:49.172529] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ea4910 (9): Bad file descriptor 00:31:43.705 [2024-07-22 10:47:49.172538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e9e420 (9): Bad file descriptor 00:31:43.705 [2024-07-22 10:47:49.172547] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18c6f50 (9): Bad file descriptor 00:31:43.705 [2024-07-22 10:47:49.172573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d07480 (9): Bad file descriptor 00:31:43.705 [2024-07-22 10:47:49.172583] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e90690 (9): Bad file descriptor 00:31:43.705 [2024-07-22 10:47:49.172591] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:31:43.705 [2024-07-22 10:47:49.172597] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:31:43.705 [2024-07-22 10:47:49.172604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:31:43.705 [2024-07-22 10:47:49.172613] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:31:43.705 [2024-07-22 10:47:49.172619] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:31:43.705 [2024-07-22 10:47:49.172629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:31:43.705 [2024-07-22 10:47:49.172638] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:31:43.705 [2024-07-22 10:47:49.172644] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:31:43.705 [2024-07-22 10:47:49.172651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:31:43.705 [2024-07-22 10:47:49.172660] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:43.705 [2024-07-22 10:47:49.172666] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:43.705 [2024-07-22 10:47:49.172673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:31:43.705 [2024-07-22 10:47:49.172700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:43.705 [2024-07-22 10:47:49.172707] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:43.705 [2024-07-22 10:47:49.172714] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:43.705 [2024-07-22 10:47:49.172720] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:43.705 [2024-07-22 10:47:49.172726] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:31:43.705 [2024-07-22 10:47:49.172733] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:31:43.705 [2024-07-22 10:47:49.172740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:31:43.705 [2024-07-22 10:47:49.172749] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:31:43.705 [2024-07-22 10:47:49.172755] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:31:43.705 [2024-07-22 10:47:49.172762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:31:43.705 [2024-07-22 10:47:49.172788] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:43.705 [2024-07-22 10:47:49.172795] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:43.705 10:47:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:31:43.705 10:47:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:31:45.089 10:47:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 2129575 00:31:45.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (2129575) - No such process 00:31:45.089 10:47:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:31:45.089 10:47:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:31:45.089 10:47:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:31:45.089 10:47:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:31:45.089 10:47:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:45.089 10:47:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:31:45.089 10:47:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:45.089 10:47:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:31:45.089 10:47:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:45.089 10:47:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:31:45.089 10:47:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:45.089 10:47:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
00:31:45.089 rmmod nvme_tcp 00:31:45.089 rmmod nvme_fabrics 00:31:45.089 rmmod nvme_keyring 00:31:45.089 10:47:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:45.089 10:47:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:31:45.089 10:47:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:31:45.089 10:47:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:31:45.089 10:47:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:45.089 10:47:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:45.089 10:47:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:45.089 10:47:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:45.089 10:47:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:45.089 10:47:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:45.089 10:47:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:45.089 10:47:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:47.004 10:47:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:47.004 00:31:47.004 real 0m7.594s 00:31:47.004 user 0m18.278s 00:31:47.004 sys 0m1.186s 00:31:47.004 10:47:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:47.004 10:47:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:47.004 ************************************ 00:31:47.004 END TEST nvmf_shutdown_tc3 00:31:47.004 ************************************ 00:31:47.004 10:47:52 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:31:47.004 10:47:52 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:31:47.004 00:31:47.004 real 0m33.140s 00:31:47.004 user 1m15.997s 00:31:47.004 sys 0m9.871s 00:31:47.004 10:47:52 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:47.004 10:47:52 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:47.004 ************************************ 00:31:47.004 END TEST nvmf_shutdown 00:31:47.004 ************************************ 00:31:47.004 10:47:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:47.004 10:47:52 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:31:47.004 10:47:52 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:47.004 10:47:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:47.004 10:47:52 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:31:47.004 10:47:52 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:47.004 10:47:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:47.004 10:47:52 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:31:47.004 10:47:52 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:31:47.004 10:47:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:47.004 10:47:52 
nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:47.004 10:47:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:47.004 ************************************ 00:31:47.004 START TEST nvmf_multicontroller 00:31:47.004 ************************************ 00:31:47.004 10:47:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:31:47.265 * Looking for test storage... 00:31:47.265 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:47.265 10:47:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:47.265 10:47:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:31:47.265 10:47:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:47.265 10:47:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:47.265 10:47:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:47.265 10:47:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:47.265 10:47:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:47.265 10:47:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:47.265 10:47:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:47.265 10:47:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:47.265 10:47:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:47.265 10:47:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:47.265 10:47:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:47.265 10:47:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:47.265 10:47:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:47.265 10:47:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:47.265 10:47:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:47.265 10:47:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:47.265 10:47:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:47.265 10:47:52 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:47.265 10:47:52 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:47.265 10:47:52 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:47.265 10:47:52 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.265 10:47:52 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.265 10:47:52 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.265 10:47:52 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:31:47.265 10:47:52 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.265 10:47:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:31:47.266 10:47:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:47.266 10:47:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:47.266 10:47:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:47.266 10:47:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:47.266 10:47:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:47.266 10:47:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:47.266 10:47:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:47.266 10:47:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:47.266 10:47:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:47.266 10:47:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:47.266 10:47:52 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:31:47.266 10:47:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:31:47.266 10:47:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:47.266 10:47:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:31:47.266 10:47:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:31:47.266 10:47:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:47.266 10:47:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:47.266 10:47:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:47.266 10:47:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:47.266 10:47:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:47.266 10:47:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:47.266 10:47:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:47.266 10:47:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:47.266 10:47:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:47.266 10:47:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:47.266 10:47:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:31:47.266 10:47:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:55.436 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:55.436 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:55.436 Found net devices under 0000:31:00.0: cvl_0_0 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:55.436 Found net devices under 0000:31:00.1: cvl_0_1 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:55.436 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:55.437 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:55.437 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:55.437 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:55.437 10:48:00 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:55.437 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:55.437 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:55.437 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:55.437 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:55.437 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:55.437 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:55.437 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:55.437 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:55.437 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:55.437 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:55.437 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:55.437 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.611 ms 00:31:55.437 00:31:55.437 --- 10.0.0.2 ping statistics --- 00:31:55.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:55.437 rtt min/avg/max/mdev = 0.611/0.611/0.611/0.000 ms 00:31:55.437 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:55.437 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:55.437 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:31:55.437 00:31:55.437 --- 10.0.0.1 ping statistics --- 00:31:55.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:55.437 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:31:55.437 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:55.437 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:31:55.437 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:55.437 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:55.437 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:55.437 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:55.437 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:55.437 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:55.437 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:55.437 10:48:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:31:55.437 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:55.437 10:48:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:55.437 10:48:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:55.437 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=2135035 00:31:55.437 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 2135035 00:31:55.437 10:48:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 2135035 ']' 00:31:55.437 10:48:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:55.437 10:48:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:55.437 10:48:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:55.437 10:48:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:55.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:55.437 10:48:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:55.437 10:48:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:55.437 [2024-07-22 10:48:01.000919] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:31:55.437 [2024-07-22 10:48:01.000966] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:55.437 EAL: No free 2048 kB hugepages reported on node 1 00:31:55.437 [2024-07-22 10:48:01.091642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:55.437 [2024-07-22 10:48:01.122747] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
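Condensed from the nvmf_tcp_init trace above: the harness puts the target-side port cvl_0_0 into a dedicated namespace (cvl_0_0_ns_spdk, 10.0.0.2/24) while the initiator-side port cvl_0_1 stays in the default namespace (10.0.0.1/24), opens TCP 4420 in iptables, and checks reachability in both directions before loading nvme-tcp and launching nvmf_tgt inside the namespace. A minimal sketch of that sequence, reusing the interface, namespace, and address names from the trace rather than replacing nvmf/common.sh, would be:

# Sketch of the traced nvmf_tcp_init steps; cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk and the
# 10.0.0.x addresses are copied from the log above. Requires root and both NIC ports.
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                            # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                         # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                     # target -> initiator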
00:31:55.437 [2024-07-22 10:48:01.122783] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:55.437 [2024-07-22 10:48:01.122790] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:55.437 [2024-07-22 10:48:01.122796] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:55.437 [2024-07-22 10:48:01.122802] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:55.437 [2024-07-22 10:48:01.126408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:55.437 [2024-07-22 10:48:01.126564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:55.437 [2024-07-22 10:48:01.126661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:56.379 10:48:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:56.379 10:48:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:31:56.379 10:48:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:56.379 10:48:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:56.379 10:48:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:56.379 10:48:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:56.379 10:48:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:56.379 10:48:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.379 10:48:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:56.379 [2024-07-22 10:48:01.819942] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:56.379 10:48:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.379 10:48:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:56.379 10:48:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.379 10:48:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:56.379 Malloc0 00:31:56.379 10:48:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.379 10:48:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:56.379 10:48:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.379 10:48:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:56.379 10:48:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.379 10:48:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:56.379 10:48:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.379 10:48:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:56.379 10:48:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.379 10:48:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 
-- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:56.379 10:48:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.379 10:48:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:56.379 [2024-07-22 10:48:01.883832] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:56.379 10:48:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.379 10:48:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:56.379 10:48:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.379 10:48:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:56.379 [2024-07-22 10:48:01.895752] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:56.380 10:48:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.380 10:48:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:31:56.380 10:48:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.380 10:48:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:56.380 Malloc1 00:31:56.380 10:48:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.380 10:48:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:31:56.380 10:48:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.380 10:48:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:56.380 10:48:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.380 10:48:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:31:56.380 10:48:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.380 10:48:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:56.380 10:48:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.380 10:48:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:56.380 10:48:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.380 10:48:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:56.380 10:48:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.380 10:48:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:31:56.380 10:48:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.380 10:48:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:56.380 10:48:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.380 10:48:01 nvmf_tcp.nvmf_multicontroller -- 
host/multicontroller.sh@44 -- # bdevperf_pid=2135283 00:31:56.380 10:48:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:56.380 10:48:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:31:56.380 10:48:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2135283 /var/tmp/bdevperf.sock 00:31:56.380 10:48:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 2135283 ']' 00:31:56.380 10:48:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:56.380 10:48:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:56.380 10:48:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:56.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:56.380 10:48:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:56.380 10:48:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:57.324 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:57.324 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:31:57.324 10:48:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:31:57.324 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.324 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:57.324 NVMe0n1 00:31:57.324 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.324 10:48:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:57.324 10:48:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:31:57.324 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.324 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:57.324 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.324 1 00:31:57.324 10:48:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:31:57.324 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:31:57.324 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:31:57.324 10:48:02 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:57.324 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:57.324 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:57.324 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:57.324 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:31:57.324 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.324 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:57.324 request: 00:31:57.324 { 00:31:57.324 "name": "NVMe0", 00:31:57.324 "trtype": "tcp", 00:31:57.324 "traddr": "10.0.0.2", 00:31:57.324 "adrfam": "ipv4", 00:31:57.324 "trsvcid": "4420", 00:31:57.324 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:57.324 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:31:57.324 "hostaddr": "10.0.0.2", 00:31:57.324 "hostsvcid": "60000", 00:31:57.324 "prchk_reftag": false, 00:31:57.324 "prchk_guard": false, 00:31:57.324 "hdgst": false, 00:31:57.324 "ddgst": false, 00:31:57.324 "method": "bdev_nvme_attach_controller", 00:31:57.324 "req_id": 1 00:31:57.324 } 00:31:57.324 Got JSON-RPC error response 00:31:57.324 response: 00:31:57.324 { 00:31:57.324 "code": -114, 00:31:57.324 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:31:57.324 } 00:31:57.325 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:57.325 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:31:57.325 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:57.325 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:57.325 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:57.325 10:48:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:31:57.325 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:31:57.325 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:31:57.325 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:57.325 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:57.325 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:57.325 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:57.325 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:31:57.325 10:48:02 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.325 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:57.325 request: 00:31:57.325 { 00:31:57.325 "name": "NVMe0", 00:31:57.325 "trtype": "tcp", 00:31:57.325 "traddr": "10.0.0.2", 00:31:57.325 "adrfam": "ipv4", 00:31:57.325 "trsvcid": "4420", 00:31:57.325 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:57.325 "hostaddr": "10.0.0.2", 00:31:57.325 "hostsvcid": "60000", 00:31:57.325 "prchk_reftag": false, 00:31:57.325 "prchk_guard": false, 00:31:57.325 "hdgst": false, 00:31:57.325 "ddgst": false, 00:31:57.325 "method": "bdev_nvme_attach_controller", 00:31:57.325 "req_id": 1 00:31:57.325 } 00:31:57.325 Got JSON-RPC error response 00:31:57.325 response: 00:31:57.325 { 00:31:57.325 "code": -114, 00:31:57.325 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:31:57.325 } 00:31:57.325 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:57.325 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:31:57.325 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:57.325 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:57.325 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:57.325 10:48:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:31:57.325 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:31:57.325 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:31:57.325 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:57.325 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:57.325 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:57.325 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:57.325 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:31:57.325 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.325 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:57.325 request: 00:31:57.325 { 00:31:57.325 "name": "NVMe0", 00:31:57.325 "trtype": "tcp", 00:31:57.325 "traddr": "10.0.0.2", 00:31:57.325 "adrfam": "ipv4", 00:31:57.325 "trsvcid": "4420", 00:31:57.325 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:57.325 "hostaddr": "10.0.0.2", 00:31:57.325 "hostsvcid": "60000", 00:31:57.325 "prchk_reftag": false, 00:31:57.325 "prchk_guard": false, 00:31:57.325 "hdgst": false, 00:31:57.325 "ddgst": false, 00:31:57.325 "multipath": "disable", 00:31:57.325 "method": "bdev_nvme_attach_controller", 00:31:57.325 
"req_id": 1 00:31:57.325 } 00:31:57.325 Got JSON-RPC error response 00:31:57.325 response: 00:31:57.325 { 00:31:57.325 "code": -114, 00:31:57.325 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:31:57.325 } 00:31:57.325 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:57.325 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:31:57.325 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:57.325 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:57.325 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:57.325 10:48:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:31:57.325 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:31:57.325 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:31:57.325 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:57.325 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:57.325 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:57.325 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:57.325 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:31:57.325 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.325 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:57.325 request: 00:31:57.325 { 00:31:57.325 "name": "NVMe0", 00:31:57.325 "trtype": "tcp", 00:31:57.325 "traddr": "10.0.0.2", 00:31:57.325 "adrfam": "ipv4", 00:31:57.325 "trsvcid": "4420", 00:31:57.325 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:57.325 "hostaddr": "10.0.0.2", 00:31:57.325 "hostsvcid": "60000", 00:31:57.325 "prchk_reftag": false, 00:31:57.325 "prchk_guard": false, 00:31:57.325 "hdgst": false, 00:31:57.325 "ddgst": false, 00:31:57.325 "multipath": "failover", 00:31:57.325 "method": "bdev_nvme_attach_controller", 00:31:57.325 "req_id": 1 00:31:57.325 } 00:31:57.325 Got JSON-RPC error response 00:31:57.325 response: 00:31:57.325 { 00:31:57.325 "code": -114, 00:31:57.325 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:31:57.325 } 00:31:57.325 10:48:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:57.325 10:48:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:31:57.325 10:48:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:57.325 10:48:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:57.325 10:48:03 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:57.325 10:48:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:57.325 10:48:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.325 10:48:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:57.586 00:31:57.586 10:48:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.586 10:48:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:57.586 10:48:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.586 10:48:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:57.586 10:48:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.586 10:48:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:31:57.586 10:48:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.586 10:48:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:57.847 00:31:57.847 10:48:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.847 10:48:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:57.847 10:48:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:31:57.847 10:48:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.847 10:48:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:57.847 10:48:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.847 10:48:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:31:57.847 10:48:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:58.788 0 00:31:58.788 10:48:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:31:58.788 10:48:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.788 10:48:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:59.048 10:48:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.048 10:48:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 2135283 00:31:59.048 10:48:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 2135283 ']' 00:31:59.048 10:48:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 2135283 00:31:59.048 10:48:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:31:59.048 10:48:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 
-- # '[' Linux = Linux ']' 00:31:59.048 10:48:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2135283 00:31:59.048 10:48:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:59.048 10:48:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:59.048 10:48:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2135283' 00:31:59.048 killing process with pid 2135283 00:31:59.048 10:48:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 2135283 00:31:59.048 10:48:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 2135283 00:31:59.048 10:48:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:59.048 10:48:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.048 10:48:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:59.048 10:48:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.048 10:48:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:59.048 10:48:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.048 10:48:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:59.048 10:48:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.048 10:48:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:31:59.048 10:48:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:59.048 10:48:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:31:59.048 10:48:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:31:59.048 10:48:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:31:59.048 10:48:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:31:59.048 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:31:59.048 [2024-07-22 10:48:02.014127] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
00:31:59.048 [2024-07-22 10:48:02.014185] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2135283 ] 00:31:59.048 EAL: No free 2048 kB hugepages reported on node 1 00:31:59.048 [2024-07-22 10:48:02.078845] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:59.048 [2024-07-22 10:48:02.110105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:59.048 [2024-07-22 10:48:03.354597] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name ce3dd4e9-c9b4-47e8-a45b-04c0e4255161 already exists 00:31:59.048 [2024-07-22 10:48:03.354629] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:ce3dd4e9-c9b4-47e8-a45b-04c0e4255161 alias for bdev NVMe1n1 00:31:59.048 [2024-07-22 10:48:03.354637] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:31:59.048 Running I/O for 1 seconds... 00:31:59.048 00:31:59.048 Latency(us) 00:31:59.048 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:59.048 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:31:59.048 NVMe0n1 : 1.00 27231.02 106.37 0.00 0.00 4685.57 2348.37 16274.77 00:31:59.048 =================================================================================================================== 00:31:59.048 Total : 27231.02 106.37 0.00 0.00 4685.57 2348.37 16274.77 00:31:59.048 Received shutdown signal, test time was about 1.000000 seconds 00:31:59.048 00:31:59.048 Latency(us) 00:31:59.048 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:59.048 =================================================================================================================== 00:31:59.048 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:59.049 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:31:59.049 10:48:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:59.049 10:48:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:31:59.049 10:48:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:31:59.049 10:48:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:59.049 10:48:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:31:59.049 10:48:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:59.049 10:48:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:31:59.049 10:48:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:59.049 10:48:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:59.049 rmmod nvme_tcp 00:31:59.309 rmmod nvme_fabrics 00:31:59.309 rmmod nvme_keyring 00:31:59.309 10:48:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:59.309 10:48:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:31:59.309 10:48:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:31:59.309 10:48:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 2135035 ']' 00:31:59.309 10:48:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 2135035 00:31:59.309 10:48:04 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@948 -- # '[' -z 2135035 ']' 00:31:59.309 10:48:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 2135035 00:31:59.309 10:48:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:31:59.309 10:48:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:59.309 10:48:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2135035 00:31:59.309 10:48:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:59.309 10:48:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:59.309 10:48:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2135035' 00:31:59.309 killing process with pid 2135035 00:31:59.309 10:48:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 2135035 00:31:59.309 10:48:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 2135035 00:31:59.309 10:48:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:59.309 10:48:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:59.309 10:48:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:59.309 10:48:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:59.309 10:48:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:59.309 10:48:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:59.309 10:48:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:59.309 10:48:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:01.850 10:48:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:01.850 00:32:01.850 real 0m14.381s 00:32:01.850 user 0m16.833s 00:32:01.850 sys 0m6.733s 00:32:01.850 10:48:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:01.850 10:48:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:01.850 ************************************ 00:32:01.850 END TEST nvmf_multicontroller 00:32:01.850 ************************************ 00:32:01.850 10:48:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:32:01.850 10:48:07 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:32:01.850 10:48:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:01.850 10:48:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:01.850 10:48:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:01.850 ************************************ 00:32:01.850 START TEST nvmf_aer 00:32:01.850 ************************************ 00:32:01.850 10:48:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:32:01.850 * Looking for test storage... 
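For readers following the trace, the nvmf_multicontroller run that ends above boils down to a short sequence of SPDK RPCs against the bdevperf control socket, followed by the usual teardown. The following is a condensed sketch of what the xtrace shows, not a separate procedure: it assumes root, an SPDK checkout as the working directory (the trace uses the absolute /var/jenkins/workspace path), and reuses this run's addresses, NQNs, pids and cvl_* interface names, all of which are specific to this CI node.

  SOCK=/var/tmp/bdevperf.sock
  # failover path exercised at host/multicontroller.sh@79-98
  ./scripts/rpc.py -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  ./scripts/rpc.py -s $SOCK bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # re-attach under a new name, adding the host-side -i/-c arguments the test passes at @87
  ./scripts/rpc.py -s $SOCK bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  ./scripts/rpc.py -s $SOCK bdev_nvme_get_controllers | grep -c NVMe    # the test expects 2
  ./examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests
  ./scripts/rpc.py -s $SOCK bdev_nvme_detach_controller NVMe1
  kill 2135283                                    # bdevperf pid in this run
  # teardown as logged above: drop the subsystems on the default RPC socket,
  # unload the host modules, stop the target and flush the initiator interface
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill 2135035                                    # nvmf_tgt pid in this run
  ip netns delete cvl_0_0_ns_spdk                 # assumed equivalent of remove_spdk_ns (its commands are traced to /dev/null)
  ip -4 addr flush cvl_0_1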
00:32:01.850 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:01.850 10:48:07 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:01.850 10:48:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:32:01.850 10:48:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:01.850 10:48:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:01.850 10:48:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:01.850 10:48:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:01.850 10:48:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:01.850 10:48:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:01.850 10:48:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:01.850 10:48:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:01.850 10:48:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:01.850 10:48:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:01.850 10:48:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:01.850 10:48:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:01.850 10:48:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:01.850 10:48:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:01.850 10:48:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:01.850 10:48:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:01.851 10:48:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:01.851 10:48:07 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:01.851 10:48:07 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:01.851 10:48:07 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:01.851 10:48:07 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.851 10:48:07 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.851 10:48:07 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.851 10:48:07 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:32:01.851 10:48:07 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.851 10:48:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:32:01.851 10:48:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:01.851 10:48:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:01.851 10:48:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:01.851 10:48:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:01.851 10:48:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:01.851 10:48:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:01.851 10:48:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:01.851 10:48:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:01.851 10:48:07 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:32:01.851 10:48:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:01.851 10:48:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:01.851 10:48:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:01.851 10:48:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:01.851 10:48:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:01.851 10:48:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:01.851 10:48:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:01.851 10:48:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:01.851 10:48:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:01.851 10:48:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:01.851 10:48:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:32:01.851 10:48:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:09.980 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 
0x159b)' 00:32:09.980 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:09.980 Found net devices under 0000:31:00.0: cvl_0_0 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:09.980 Found net devices under 0000:31:00.1: cvl_0_1 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:09.980 
10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:09.980 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:09.980 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.646 ms 00:32:09.980 00:32:09.980 --- 10.0.0.2 ping statistics --- 00:32:09.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:09.980 rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:09.980 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:09.980 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.357 ms 00:32:09.980 00:32:09.980 --- 10.0.0.1 ping statistics --- 00:32:09.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:09.980 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=2140925 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 2140925 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 2140925 ']' 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:09.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:09.980 10:48:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:09.980 [2024-07-22 10:48:15.500891] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:32:09.980 [2024-07-22 10:48:15.500955] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:09.980 EAL: No free 2048 kB hugepages reported on node 1 00:32:09.980 [2024-07-22 10:48:15.579007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:09.980 [2024-07-22 10:48:15.619147] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:09.980 [2024-07-22 10:48:15.619186] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
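Before the nvmf_tgt startup notices above, nvmftestinit wires the two ice ports into a namespace-based target/initiator split: the target port (cvl_0_0, 10.0.0.2) is moved into a private netns while the initiator port (cvl_0_1, 10.0.0.1) stays in the default namespace. The block below only gathers the commands already visible in the trace into one place; it assumes root, this node's cvl_* interface names, and an SPDK checkout as the working directory for the nvmf_tgt path.

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port lives in its own namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # connectivity check in both directions
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # the harness then waits for the target's RPC socket before issuing nvmf_create_transport etc.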
00:32:09.980 [2024-07-22 10:48:15.619195] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:09.980 [2024-07-22 10:48:15.619201] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:09.980 [2024-07-22 10:48:15.619207] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:09.980 [2024-07-22 10:48:15.619350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:09.980 [2024-07-22 10:48:15.619506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:09.980 [2024-07-22 10:48:15.619789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:09.980 [2024-07-22 10:48:15.619790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:10.921 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:10.921 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:32:10.921 10:48:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:10.921 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:10.921 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:10.921 10:48:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:10.921 10:48:16 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:10.921 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.921 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:10.921 [2024-07-22 10:48:16.332147] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:10.921 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.921 10:48:16 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:32:10.921 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.921 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:10.921 Malloc0 00:32:10.921 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.921 10:48:16 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:32:10.921 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.921 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:10.921 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.921 10:48:16 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:10.921 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.921 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:10.921 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.921 10:48:16 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:10.921 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.921 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:10.921 [2024-07-22 10:48:16.391497] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:32:10.921 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.921 10:48:16 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:32:10.921 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.921 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:10.921 [ 00:32:10.921 { 00:32:10.921 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:32:10.921 "subtype": "Discovery", 00:32:10.921 "listen_addresses": [], 00:32:10.921 "allow_any_host": true, 00:32:10.921 "hosts": [] 00:32:10.921 }, 00:32:10.921 { 00:32:10.921 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:10.921 "subtype": "NVMe", 00:32:10.921 "listen_addresses": [ 00:32:10.921 { 00:32:10.921 "trtype": "TCP", 00:32:10.921 "adrfam": "IPv4", 00:32:10.921 "traddr": "10.0.0.2", 00:32:10.921 "trsvcid": "4420" 00:32:10.921 } 00:32:10.921 ], 00:32:10.921 "allow_any_host": true, 00:32:10.921 "hosts": [], 00:32:10.921 "serial_number": "SPDK00000000000001", 00:32:10.921 "model_number": "SPDK bdev Controller", 00:32:10.921 "max_namespaces": 2, 00:32:10.921 "min_cntlid": 1, 00:32:10.921 "max_cntlid": 65519, 00:32:10.921 "namespaces": [ 00:32:10.921 { 00:32:10.921 "nsid": 1, 00:32:10.921 "bdev_name": "Malloc0", 00:32:10.921 "name": "Malloc0", 00:32:10.921 "nguid": "48B8AB035E7740458DD0F667B0412A97", 00:32:10.921 "uuid": "48b8ab03-5e77-4045-8dd0-f667b0412a97" 00:32:10.921 } 00:32:10.921 ] 00:32:10.921 } 00:32:10.921 ] 00:32:10.921 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.921 10:48:16 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:32:10.921 10:48:16 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:32:10.921 10:48:16 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=2141103 00:32:10.921 10:48:16 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:32:10.921 10:48:16 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:32:10.921 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:32:10.921 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:32:10.921 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:32:10.922 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:32:10.922 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:32:10.922 EAL: No free 2048 kB hugepages reported on node 1 00:32:10.922 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:32:10.922 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:32:10.922 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:32:10.922 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:32:11.182 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:32:11.182 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:32:11.182 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:32:11.182 10:48:16 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:32:11.182 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.182 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:11.182 Malloc1 00:32:11.182 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.182 10:48:16 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:32:11.182 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.182 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:11.182 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.182 10:48:16 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:32:11.182 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.182 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:11.182 Asynchronous Event Request test 00:32:11.182 Attaching to 10.0.0.2 00:32:11.182 Attached to 10.0.0.2 00:32:11.182 Registering asynchronous event callbacks... 00:32:11.182 Starting namespace attribute notice tests for all controllers... 00:32:11.182 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:32:11.182 aer_cb - Changed Namespace 00:32:11.182 Cleaning up... 00:32:11.182 [ 00:32:11.182 { 00:32:11.182 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:32:11.182 "subtype": "Discovery", 00:32:11.182 "listen_addresses": [], 00:32:11.182 "allow_any_host": true, 00:32:11.182 "hosts": [] 00:32:11.182 }, 00:32:11.182 { 00:32:11.182 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:11.182 "subtype": "NVMe", 00:32:11.182 "listen_addresses": [ 00:32:11.182 { 00:32:11.182 "trtype": "TCP", 00:32:11.182 "adrfam": "IPv4", 00:32:11.182 "traddr": "10.0.0.2", 00:32:11.182 "trsvcid": "4420" 00:32:11.182 } 00:32:11.182 ], 00:32:11.182 "allow_any_host": true, 00:32:11.182 "hosts": [], 00:32:11.182 "serial_number": "SPDK00000000000001", 00:32:11.182 "model_number": "SPDK bdev Controller", 00:32:11.182 "max_namespaces": 2, 00:32:11.182 "min_cntlid": 1, 00:32:11.182 "max_cntlid": 65519, 00:32:11.182 "namespaces": [ 00:32:11.182 { 00:32:11.182 "nsid": 1, 00:32:11.182 "bdev_name": "Malloc0", 00:32:11.182 "name": "Malloc0", 00:32:11.182 "nguid": "48B8AB035E7740458DD0F667B0412A97", 00:32:11.182 "uuid": "48b8ab03-5e77-4045-8dd0-f667b0412a97" 00:32:11.182 }, 00:32:11.182 { 00:32:11.182 "nsid": 2, 00:32:11.182 "bdev_name": "Malloc1", 00:32:11.182 "name": "Malloc1", 00:32:11.182 "nguid": "AA7F4AE2640A432DA9465D356CB13361", 00:32:11.182 "uuid": "aa7f4ae2-640a-432d-a946-5d356cb13361" 00:32:11.182 } 00:32:11.182 ] 00:32:11.182 } 00:32:11.182 ] 00:32:11.182 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.182 10:48:16 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 2141103 00:32:11.182 10:48:16 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:32:11.182 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.182 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:11.182 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.182 10:48:16 nvmf_tcp.nvmf_aer -- host/aer.sh@46 
-- # rpc_cmd bdev_malloc_delete Malloc1 00:32:11.182 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.182 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:11.182 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.182 10:48:16 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:11.182 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.182 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:11.182 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.182 10:48:16 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:32:11.182 10:48:16 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:32:11.182 10:48:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:11.182 10:48:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:32:11.182 10:48:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:11.182 10:48:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:32:11.182 10:48:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:11.182 10:48:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:11.182 rmmod nvme_tcp 00:32:11.182 rmmod nvme_fabrics 00:32:11.182 rmmod nvme_keyring 00:32:11.182 10:48:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:11.182 10:48:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:32:11.182 10:48:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:32:11.182 10:48:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 2140925 ']' 00:32:11.182 10:48:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 2140925 00:32:11.182 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 2140925 ']' 00:32:11.182 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 2140925 00:32:11.182 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:32:11.182 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:11.182 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2140925 00:32:11.182 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:11.182 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:11.182 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2140925' 00:32:11.182 killing process with pid 2140925 00:32:11.182 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 2140925 00:32:11.182 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 2140925 00:32:11.442 10:48:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:11.442 10:48:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:11.442 10:48:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:11.442 10:48:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:11.442 10:48:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:11.442 10:48:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:11.442 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:32:11.442 10:48:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:13.352 10:48:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:13.612 00:32:13.612 real 0m11.909s 00:32:13.612 user 0m7.888s 00:32:13.612 sys 0m6.367s 00:32:13.612 10:48:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:13.612 10:48:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:13.612 ************************************ 00:32:13.612 END TEST nvmf_aer 00:32:13.612 ************************************ 00:32:13.612 10:48:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:32:13.612 10:48:19 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:32:13.612 10:48:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:13.612 10:48:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:13.612 10:48:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:13.612 ************************************ 00:32:13.612 START TEST nvmf_async_init 00:32:13.612 ************************************ 00:32:13.612 10:48:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:32:13.612 * Looking for test storage... 00:32:13.612 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:13.612 10:48:19 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:13.612 10:48:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:32:13.612 10:48:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:13.612 10:48:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:13.612 10:48:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:13.612 10:48:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:13.612 10:48:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:13.612 10:48:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:13.612 10:48:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:13.612 10:48:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:13.612 10:48:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:13.612 10:48:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:13.612 10:48:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:13.612 10:48:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:13.612 10:48:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:13.612 10:48:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:13.612 10:48:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:13.612 10:48:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:13.612 10:48:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:13.612 10:48:19 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:13.612 10:48:19 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:13.612 10:48:19 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:13.612 10:48:19 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:13.612 10:48:19 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:13.612 10:48:19 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:13.612 10:48:19 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:32:13.613 10:48:19 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:13.613 10:48:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:32:13.613 10:48:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:13.613 10:48:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:13.613 10:48:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:13.613 10:48:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:13.613 10:48:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:13.613 10:48:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:32:13.613 10:48:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:13.613 10:48:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:13.613 10:48:19 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:32:13.613 10:48:19 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:32:13.613 10:48:19 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:32:13.613 10:48:19 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:32:13.613 10:48:19 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:32:13.613 10:48:19 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:32:13.613 10:48:19 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=863bd3cd5f2e49e5b155bf6e10f7bff8 00:32:13.613 10:48:19 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:32:13.613 10:48:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:13.613 10:48:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:13.613 10:48:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:13.613 10:48:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:13.613 10:48:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:13.613 10:48:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:13.613 10:48:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:13.613 10:48:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:13.613 10:48:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:13.613 10:48:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:13.613 10:48:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:32:13.613 10:48:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:21.749 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:21.749 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:32:21.749 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:21.749 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:21.749 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:21.749 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:21.749 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:21.749 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:32:21.749 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:21.749 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:32:21.749 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:32:21.749 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:32:21.749 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:32:21.749 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:32:21.749 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:32:21.749 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:21.750 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:21.750 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # 
[[ tcp == rdma ]] 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:21.750 Found net devices under 0000:31:00.0: cvl_0_0 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:21.750 Found net devices under 0000:31:00.1: cvl_0_1 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:21.750 
10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:21.750 10:48:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:21.750 10:48:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:21.750 10:48:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:21.750 10:48:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:21.750 10:48:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:21.750 10:48:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:21.750 10:48:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:21.750 10:48:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:21.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:21.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:32:21.750 00:32:21.750 --- 10.0.0.2 ping statistics --- 00:32:21.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:21.750 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:32:21.750 10:48:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:21.750 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:21.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:32:21.750 00:32:21.750 --- 10.0.0.1 ping statistics --- 00:32:21.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:21.750 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:32:21.750 10:48:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:21.750 10:48:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:32:21.750 10:48:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:21.750 10:48:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:21.750 10:48:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:21.750 10:48:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:21.750 10:48:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:21.750 10:48:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:21.750 10:48:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:21.750 10:48:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:32:21.750 10:48:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:21.750 10:48:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:21.750 10:48:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:21.750 10:48:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=2145892 00:32:21.750 10:48:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 2145892 00:32:21.750 10:48:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x1 00:32:21.750 10:48:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 2145892 ']' 00:32:21.750 10:48:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:21.750 10:48:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:21.750 10:48:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:21.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:21.750 10:48:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:21.750 10:48:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:21.750 [2024-07-22 10:48:27.375999] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:32:21.750 [2024-07-22 10:48:27.376048] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:21.750 EAL: No free 2048 kB hugepages reported on node 1 00:32:22.011 [2024-07-22 10:48:27.449620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:22.011 [2024-07-22 10:48:27.480148] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:22.011 [2024-07-22 10:48:27.480187] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:22.011 [2024-07-22 10:48:27.480194] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:22.011 [2024-07-22 10:48:27.480201] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:22.011 [2024-07-22 10:48:27.480207] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
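The nvmf_tcp_init sequence traced above builds the test topology by hand: one E810 port (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2, its sibling port (cvl_0_1) stays in the default namespace as 10.0.0.1, and TCP port 4420 is opened in iptables before both directions are ping-checked. A condensed standalone sketch of the same steps, using the interface and namespace names from this particular run (they are not fixed names):

  NS=cvl_0_0_ns_spdk                        # target-side namespace used in this run
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"           # target port leaves the default namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1       # initiator-side address
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target-side address
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                        # default namespace -> namespaced port
  ip netns exec "$NS" ping -c 1 10.0.0.1    # and back

nvmf_tgt is then started under ip netns exec $NS (the NVMF_TARGET_NS_CMD prefix above), which is why every listener configured later binds to the namespaced 10.0.0.2 address.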
00:32:22.011 [2024-07-22 10:48:27.480231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:22.582 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:22.582 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:32:22.582 10:48:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:22.582 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:22.582 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:22.582 10:48:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:22.582 10:48:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:32:22.582 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.582 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:22.582 [2024-07-22 10:48:28.192489] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:22.582 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.582 10:48:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:32:22.582 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.582 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:22.582 null0 00:32:22.582 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.582 10:48:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:32:22.582 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.582 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:22.582 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.582 10:48:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:32:22.582 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.582 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:22.582 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.582 10:48:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 863bd3cd5f2e49e5b155bf6e10f7bff8 00:32:22.582 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.582 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:22.582 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.582 10:48:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:22.582 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.582 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:22.582 [2024-07-22 10:48:28.232717] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:22.582 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:32:22.582 10:48:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:32:22.582 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.582 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:22.867 nvme0n1 00:32:22.867 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.867 10:48:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:32:22.867 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.867 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:22.867 [ 00:32:22.867 { 00:32:22.867 "name": "nvme0n1", 00:32:22.867 "aliases": [ 00:32:22.867 "863bd3cd-5f2e-49e5-b155-bf6e10f7bff8" 00:32:22.867 ], 00:32:22.867 "product_name": "NVMe disk", 00:32:22.867 "block_size": 512, 00:32:22.867 "num_blocks": 2097152, 00:32:22.868 "uuid": "863bd3cd-5f2e-49e5-b155-bf6e10f7bff8", 00:32:22.868 "assigned_rate_limits": { 00:32:22.868 "rw_ios_per_sec": 0, 00:32:22.868 "rw_mbytes_per_sec": 0, 00:32:22.868 "r_mbytes_per_sec": 0, 00:32:22.868 "w_mbytes_per_sec": 0 00:32:22.868 }, 00:32:22.868 "claimed": false, 00:32:22.868 "zoned": false, 00:32:22.868 "supported_io_types": { 00:32:22.868 "read": true, 00:32:22.868 "write": true, 00:32:22.868 "unmap": false, 00:32:22.868 "flush": true, 00:32:22.868 "reset": true, 00:32:22.868 "nvme_admin": true, 00:32:22.868 "nvme_io": true, 00:32:22.868 "nvme_io_md": false, 00:32:22.868 "write_zeroes": true, 00:32:22.868 "zcopy": false, 00:32:22.868 "get_zone_info": false, 00:32:22.868 "zone_management": false, 00:32:22.868 "zone_append": false, 00:32:22.868 "compare": true, 00:32:22.868 "compare_and_write": true, 00:32:22.868 "abort": true, 00:32:22.868 "seek_hole": false, 00:32:22.868 "seek_data": false, 00:32:22.868 "copy": true, 00:32:22.868 "nvme_iov_md": false 00:32:22.868 }, 00:32:22.868 "memory_domains": [ 00:32:22.868 { 00:32:22.868 "dma_device_id": "system", 00:32:22.868 "dma_device_type": 1 00:32:22.868 } 00:32:22.868 ], 00:32:22.868 "driver_specific": { 00:32:22.868 "nvme": [ 00:32:22.868 { 00:32:22.868 "trid": { 00:32:22.868 "trtype": "TCP", 00:32:22.868 "adrfam": "IPv4", 00:32:22.868 "traddr": "10.0.0.2", 00:32:22.868 "trsvcid": "4420", 00:32:22.868 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:22.868 }, 00:32:22.868 "ctrlr_data": { 00:32:22.868 "cntlid": 1, 00:32:22.868 "vendor_id": "0x8086", 00:32:22.868 "model_number": "SPDK bdev Controller", 00:32:22.868 "serial_number": "00000000000000000000", 00:32:22.868 "firmware_revision": "24.09", 00:32:22.868 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:22.868 "oacs": { 00:32:22.868 "security": 0, 00:32:22.868 "format": 0, 00:32:22.868 "firmware": 0, 00:32:22.868 "ns_manage": 0 00:32:22.868 }, 00:32:22.868 "multi_ctrlr": true, 00:32:22.868 "ana_reporting": false 00:32:22.868 }, 00:32:22.868 "vs": { 00:32:22.868 "nvme_version": "1.3" 00:32:22.868 }, 00:32:22.868 "ns_data": { 00:32:22.868 "id": 1, 00:32:22.868 "can_share": true 00:32:22.868 } 00:32:22.868 } 00:32:22.868 ], 00:32:22.868 "mp_policy": "active_passive" 00:32:22.868 } 00:32:22.868 } 00:32:22.868 ] 00:32:22.868 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.868 10:48:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 
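The async_init body itself is a short RPC sequence: create the TCP transport, back it with a null bdev, export that as a namespace of nqn.2016-06.io.spdk:cnode0 on 10.0.0.2:4420, then attach the target back to itself as an NVMe initiator so the bdev nvme0n1 appears. The trace drives this through the autotest rpc_cmd helper; a rough equivalent with scripts/rpc.py, where SPDK_DIR is a placeholder for the checkout used above:

  RPC="$SPDK_DIR/scripts/rpc.py"            # talks to /var/tmp/spdk.sock by default
  $RPC nvmf_create_transport -t tcp -o
  $RPC bdev_null_create null0 1024 512      # 512-byte blocks; shows up as num_blocks 2097152 above
  $RPC bdev_wait_for_examine
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 863bd3cd5f2e49e5b155bf6e10f7bff8
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
          -n nqn.2016-06.io.spdk:cnode0
  $RPC bdev_get_bdevs -b nvme0n1            # JSON dump as captured above

The identifier passed with -g is exactly what comes back as the bdev's uuid/alias (863bd3cd-5f2e-49e5-b155-bf6e10f7bff8), and the bdev_nvme_reset_controller call that follows simply reconnects the same path, which is why the second dump below is identical except for cntlid moving from 1 to 2.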
00:32:22.868 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.868 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:22.868 [2024-07-22 10:48:28.481437] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:22.868 [2024-07-22 10:48:28.481500] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x171ce00 (9): Bad file descriptor 00:32:23.167 [2024-07-22 10:48:28.613494] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:32:23.167 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.167 10:48:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:32:23.167 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.167 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:23.167 [ 00:32:23.167 { 00:32:23.167 "name": "nvme0n1", 00:32:23.167 "aliases": [ 00:32:23.167 "863bd3cd-5f2e-49e5-b155-bf6e10f7bff8" 00:32:23.167 ], 00:32:23.167 "product_name": "NVMe disk", 00:32:23.167 "block_size": 512, 00:32:23.167 "num_blocks": 2097152, 00:32:23.167 "uuid": "863bd3cd-5f2e-49e5-b155-bf6e10f7bff8", 00:32:23.167 "assigned_rate_limits": { 00:32:23.167 "rw_ios_per_sec": 0, 00:32:23.167 "rw_mbytes_per_sec": 0, 00:32:23.167 "r_mbytes_per_sec": 0, 00:32:23.167 "w_mbytes_per_sec": 0 00:32:23.167 }, 00:32:23.167 "claimed": false, 00:32:23.167 "zoned": false, 00:32:23.167 "supported_io_types": { 00:32:23.167 "read": true, 00:32:23.167 "write": true, 00:32:23.167 "unmap": false, 00:32:23.167 "flush": true, 00:32:23.167 "reset": true, 00:32:23.167 "nvme_admin": true, 00:32:23.167 "nvme_io": true, 00:32:23.167 "nvme_io_md": false, 00:32:23.167 "write_zeroes": true, 00:32:23.167 "zcopy": false, 00:32:23.167 "get_zone_info": false, 00:32:23.167 "zone_management": false, 00:32:23.167 "zone_append": false, 00:32:23.167 "compare": true, 00:32:23.167 "compare_and_write": true, 00:32:23.167 "abort": true, 00:32:23.167 "seek_hole": false, 00:32:23.167 "seek_data": false, 00:32:23.167 "copy": true, 00:32:23.167 "nvme_iov_md": false 00:32:23.167 }, 00:32:23.167 "memory_domains": [ 00:32:23.167 { 00:32:23.167 "dma_device_id": "system", 00:32:23.167 "dma_device_type": 1 00:32:23.167 } 00:32:23.167 ], 00:32:23.167 "driver_specific": { 00:32:23.167 "nvme": [ 00:32:23.167 { 00:32:23.167 "trid": { 00:32:23.167 "trtype": "TCP", 00:32:23.167 "adrfam": "IPv4", 00:32:23.167 "traddr": "10.0.0.2", 00:32:23.167 "trsvcid": "4420", 00:32:23.167 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:23.167 }, 00:32:23.167 "ctrlr_data": { 00:32:23.167 "cntlid": 2, 00:32:23.167 "vendor_id": "0x8086", 00:32:23.167 "model_number": "SPDK bdev Controller", 00:32:23.167 "serial_number": "00000000000000000000", 00:32:23.167 "firmware_revision": "24.09", 00:32:23.167 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:23.167 "oacs": { 00:32:23.167 "security": 0, 00:32:23.167 "format": 0, 00:32:23.167 "firmware": 0, 00:32:23.167 "ns_manage": 0 00:32:23.167 }, 00:32:23.167 "multi_ctrlr": true, 00:32:23.167 "ana_reporting": false 00:32:23.167 }, 00:32:23.167 "vs": { 00:32:23.167 "nvme_version": "1.3" 00:32:23.167 }, 00:32:23.167 "ns_data": { 00:32:23.167 "id": 1, 00:32:23.167 "can_share": true 00:32:23.167 } 00:32:23.167 } 00:32:23.167 ], 00:32:23.167 "mp_policy": "active_passive" 00:32:23.167 } 00:32:23.167 } 
00:32:23.167 ] 00:32:23.167 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.167 10:48:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:23.167 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.167 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:23.167 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.167 10:48:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:32:23.167 10:48:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.9pGHh2JpcU 00:32:23.167 10:48:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:32:23.167 10:48:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.9pGHh2JpcU 00:32:23.167 10:48:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:32:23.167 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.167 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:23.167 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.167 10:48:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:32:23.167 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.167 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:23.167 [2024-07-22 10:48:28.682060] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:23.167 [2024-07-22 10:48:28.682178] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:23.167 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.167 10:48:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9pGHh2JpcU 00:32:23.167 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.167 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:23.167 [2024-07-22 10:48:28.694082] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:32:23.167 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.167 10:48:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9pGHh2JpcU 00:32:23.167 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.167 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:23.167 [2024-07-22 10:48:28.706133] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:23.167 [2024-07-22 10:48:28.706169] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 
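The TLS leg repeats the attach over a second listener: a PSK in NVMeTLSkey-1 interchange format is written to a temp file with mode 0600, any-host access is disabled, a --secure-channel listener is added on port 4421, host nqn.2016-06.io.spdk:host1 is authorized with that PSK, and the controller is re-attached with -q and --psk. Sketched with the values the trace shows (the PSK path and the controller --psk option are both flagged as experimental/deprecated at this SPDK revision):

  RPC="$SPDK_DIR/scripts/rpc.py"            # as in the earlier sketch; SPDK_DIR is a placeholder
  KEY=$(mktemp)                             # /tmp/tmp.9pGHh2JpcU in this run
  echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$KEY"
  chmod 0600 "$KEY"
  $RPC nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$KEY"
  $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
          -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"
  rm -f "$KEY"                              # the test removes the key file once attached

The resulting bdev dump is the same namespace seen before, now reached through trsvcid 4421 with cntlid 3, and the deprecation counters for both PSK features are reported once the app shuts down.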
00:32:23.167 nvme0n1 00:32:23.167 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.167 10:48:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:32:23.167 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.167 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:23.167 [ 00:32:23.167 { 00:32:23.167 "name": "nvme0n1", 00:32:23.167 "aliases": [ 00:32:23.167 "863bd3cd-5f2e-49e5-b155-bf6e10f7bff8" 00:32:23.167 ], 00:32:23.167 "product_name": "NVMe disk", 00:32:23.167 "block_size": 512, 00:32:23.167 "num_blocks": 2097152, 00:32:23.167 "uuid": "863bd3cd-5f2e-49e5-b155-bf6e10f7bff8", 00:32:23.167 "assigned_rate_limits": { 00:32:23.167 "rw_ios_per_sec": 0, 00:32:23.167 "rw_mbytes_per_sec": 0, 00:32:23.167 "r_mbytes_per_sec": 0, 00:32:23.167 "w_mbytes_per_sec": 0 00:32:23.167 }, 00:32:23.167 "claimed": false, 00:32:23.167 "zoned": false, 00:32:23.167 "supported_io_types": { 00:32:23.167 "read": true, 00:32:23.167 "write": true, 00:32:23.167 "unmap": false, 00:32:23.167 "flush": true, 00:32:23.167 "reset": true, 00:32:23.167 "nvme_admin": true, 00:32:23.167 "nvme_io": true, 00:32:23.167 "nvme_io_md": false, 00:32:23.167 "write_zeroes": true, 00:32:23.167 "zcopy": false, 00:32:23.167 "get_zone_info": false, 00:32:23.167 "zone_management": false, 00:32:23.167 "zone_append": false, 00:32:23.167 "compare": true, 00:32:23.167 "compare_and_write": true, 00:32:23.167 "abort": true, 00:32:23.167 "seek_hole": false, 00:32:23.167 "seek_data": false, 00:32:23.167 "copy": true, 00:32:23.167 "nvme_iov_md": false 00:32:23.167 }, 00:32:23.167 "memory_domains": [ 00:32:23.167 { 00:32:23.167 "dma_device_id": "system", 00:32:23.167 "dma_device_type": 1 00:32:23.167 } 00:32:23.167 ], 00:32:23.167 "driver_specific": { 00:32:23.167 "nvme": [ 00:32:23.167 { 00:32:23.167 "trid": { 00:32:23.167 "trtype": "TCP", 00:32:23.167 "adrfam": "IPv4", 00:32:23.167 "traddr": "10.0.0.2", 00:32:23.167 "trsvcid": "4421", 00:32:23.167 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:23.167 }, 00:32:23.167 "ctrlr_data": { 00:32:23.167 "cntlid": 3, 00:32:23.167 "vendor_id": "0x8086", 00:32:23.167 "model_number": "SPDK bdev Controller", 00:32:23.167 "serial_number": "00000000000000000000", 00:32:23.167 "firmware_revision": "24.09", 00:32:23.167 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:23.167 "oacs": { 00:32:23.167 "security": 0, 00:32:23.167 "format": 0, 00:32:23.167 "firmware": 0, 00:32:23.167 "ns_manage": 0 00:32:23.167 }, 00:32:23.167 "multi_ctrlr": true, 00:32:23.167 "ana_reporting": false 00:32:23.167 }, 00:32:23.167 "vs": { 00:32:23.167 "nvme_version": "1.3" 00:32:23.167 }, 00:32:23.167 "ns_data": { 00:32:23.167 "id": 1, 00:32:23.167 "can_share": true 00:32:23.167 } 00:32:23.167 } 00:32:23.167 ], 00:32:23.167 "mp_policy": "active_passive" 00:32:23.167 } 00:32:23.167 } 00:32:23.167 ] 00:32:23.167 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.167 10:48:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:23.167 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.168 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:23.168 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.168 10:48:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f 
/tmp/tmp.9pGHh2JpcU 00:32:23.168 10:48:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:32:23.168 10:48:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:32:23.168 10:48:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:23.168 10:48:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:32:23.168 10:48:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:23.168 10:48:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:32:23.168 10:48:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:23.168 10:48:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:23.168 rmmod nvme_tcp 00:32:23.444 rmmod nvme_fabrics 00:32:23.444 rmmod nvme_keyring 00:32:23.444 10:48:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:23.444 10:48:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:32:23.444 10:48:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:32:23.444 10:48:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 2145892 ']' 00:32:23.444 10:48:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 2145892 00:32:23.444 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 2145892 ']' 00:32:23.444 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 2145892 00:32:23.444 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:32:23.444 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:23.444 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2145892 00:32:23.444 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:23.444 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:23.444 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2145892' 00:32:23.444 killing process with pid 2145892 00:32:23.444 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 2145892 00:32:23.444 [2024-07-22 10:48:28.967788] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:32:23.444 [2024-07-22 10:48:28.967814] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:32:23.444 10:48:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 2145892 00:32:23.444 10:48:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:23.444 10:48:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:23.444 10:48:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:23.444 10:48:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:23.444 10:48:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:23.444 10:48:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:23.444 10:48:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:23.444 10:48:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:32:25.989 10:48:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:25.989 00:32:25.989 real 0m12.019s 00:32:25.989 user 0m4.237s 00:32:25.989 sys 0m6.209s 00:32:25.989 10:48:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:25.989 10:48:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:25.989 ************************************ 00:32:25.989 END TEST nvmf_async_init 00:32:25.989 ************************************ 00:32:25.989 10:48:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:32:25.989 10:48:31 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:32:25.989 10:48:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:25.989 10:48:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:25.989 10:48:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:25.989 ************************************ 00:32:25.989 START TEST dma 00:32:25.989 ************************************ 00:32:25.989 10:48:31 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:32:25.989 * Looking for test storage... 00:32:25.989 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:25.989 10:48:31 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:25.989 10:48:31 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:32:25.989 10:48:31 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:25.989 10:48:31 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:25.989 10:48:31 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:25.989 10:48:31 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:25.989 10:48:31 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:25.989 10:48:31 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:25.989 10:48:31 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:25.989 10:48:31 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:25.989 10:48:31 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:25.989 10:48:31 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:25.989 10:48:31 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:25.989 10:48:31 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:25.989 10:48:31 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:25.989 10:48:31 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:25.989 10:48:31 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:25.989 10:48:31 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:25.989 10:48:31 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:25.989 10:48:31 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:25.989 10:48:31 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:25.989 10:48:31 nvmf_tcp.dma -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:32:25.989 10:48:31 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.989 10:48:31 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.989 10:48:31 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.989 10:48:31 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:32:25.989 10:48:31 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.989 10:48:31 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:32:25.989 10:48:31 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:25.989 10:48:31 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:25.989 10:48:31 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:25.989 10:48:31 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:25.989 10:48:31 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:25.989 10:48:31 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:25.989 10:48:31 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:25.989 10:48:31 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:25.989 10:48:31 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:32:25.989 10:48:31 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:32:25.989 00:32:25.989 real 0m0.115s 00:32:25.989 user 0m0.047s 00:32:25.989 sys 0m0.074s 00:32:25.989 10:48:31 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:25.989 10:48:31 nvmf_tcp.dma 
-- common/autotest_common.sh@10 -- # set +x 00:32:25.989 ************************************ 00:32:25.989 END TEST dma 00:32:25.989 ************************************ 00:32:25.989 10:48:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:32:25.989 10:48:31 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:32:25.989 10:48:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:25.989 10:48:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:25.989 10:48:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:25.989 ************************************ 00:32:25.989 START TEST nvmf_identify 00:32:25.989 ************************************ 00:32:25.989 10:48:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:32:25.989 * Looking for test storage... 00:32:25.989 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:25.989 10:48:31 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:25.989 10:48:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:32:25.989 10:48:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:25.989 10:48:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:25.989 10:48:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:25.989 10:48:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:25.989 10:48:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:25.989 10:48:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:25.989 10:48:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:25.989 10:48:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:25.989 10:48:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:25.989 10:48:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:25.989 10:48:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:25.989 10:48:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:25.989 10:48:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:25.989 10:48:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:25.989 10:48:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:25.989 10:48:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:25.989 10:48:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:25.989 10:48:31 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:25.989 10:48:31 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:25.989 10:48:31 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:25.990 10:48:31 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.990 10:48:31 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.990 10:48:31 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.990 10:48:31 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:32:25.990 10:48:31 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.990 10:48:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:32:25.990 10:48:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:25.990 10:48:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:25.990 10:48:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:25.990 10:48:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:25.990 10:48:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:25.990 10:48:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:25.990 10:48:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:25.990 10:48:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:25.990 10:48:31 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:25.990 10:48:31 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:25.990 10:48:31 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:32:25.990 10:48:31 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:25.990 10:48:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:25.990 10:48:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:25.990 10:48:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:25.990 10:48:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:25.990 10:48:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:25.990 10:48:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:25.990 10:48:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:25.990 10:48:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:25.990 10:48:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:25.990 10:48:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:32:25.990 10:48:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:34.129 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:34.129 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:34.129 Found net devices under 0000:31:00.0: cvl_0_0 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
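This second pass through nvmf/common.sh (now for the identify test) repeats the hardware discovery: the pci_bus_cache arrays are filtered by vendor:device pairs, the two E810 ports here match intel 0x8086:0x159b, and each surviving function is mapped to its kernel interface through /sys/bus/pci/devices/$pci/net. A rough manual equivalent of that lookup, assuming lspci is available and using the 0x159b device ID from this machine:

  for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
      echo "Found $pci: $(ls "/sys/bus/pci/devices/$pci/net/" 2>/dev/null)"
  done

On this host that yields 0000:31:00.0 and 0000:31:00.1 with net devices cvl_0_0 and cvl_0_1, matching the "Found net devices under ..." lines in the trace.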
00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:34.129 Found net devices under 0000:31:00.1: cvl_0_1 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:34.129 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:34.129 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.678 ms 00:32:34.129 00:32:34.129 --- 10.0.0.2 ping statistics --- 00:32:34.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:34.129 rtt min/avg/max/mdev = 0.678/0.678/0.678/0.000 ms 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:34.129 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:34.129 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:32:34.129 00:32:34.129 --- 10.0.0.1 ping statistics --- 00:32:34.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:34.129 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2150878 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2150878 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 2150878 ']' 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:34.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:34.129 10:48:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:34.129 [2024-07-22 10:48:39.754580] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
00:32:34.129 [2024-07-22 10:48:39.754629] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:34.129 EAL: No free 2048 kB hugepages reported on node 1 00:32:34.129 [2024-07-22 10:48:39.826363] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:34.388 [2024-07-22 10:48:39.859462] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:34.388 [2024-07-22 10:48:39.859498] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:34.388 [2024-07-22 10:48:39.859505] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:34.388 [2024-07-22 10:48:39.859512] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:34.388 [2024-07-22 10:48:39.859517] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:34.388 [2024-07-22 10:48:39.859567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:34.388 [2024-07-22 10:48:39.859652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:34.388 [2024-07-22 10:48:39.859795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:34.388 [2024-07-22 10:48:39.859796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:34.956 10:48:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:34.956 10:48:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:32:34.956 10:48:40 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:34.956 10:48:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.956 10:48:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:34.956 [2024-07-22 10:48:40.538043] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:34.956 10:48:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.956 10:48:40 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:32:34.956 10:48:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:34.956 10:48:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:34.956 10:48:40 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:34.956 10:48:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.956 10:48:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:34.956 Malloc0 00:32:34.956 10:48:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.956 10:48:40 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:34.956 10:48:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.956 10:48:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:34.956 10:48:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.956 10:48:40 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid 
ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:32:34.956 10:48:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.956 10:48:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:34.956 10:48:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.956 10:48:40 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:34.956 10:48:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.956 10:48:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:34.956 [2024-07-22 10:48:40.630967] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:34.956 10:48:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.956 10:48:40 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:34.956 10:48:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.956 10:48:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:34.956 10:48:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.956 10:48:40 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:32:34.956 10:48:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.956 10:48:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:35.218 [ 00:32:35.218 { 00:32:35.218 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:32:35.218 "subtype": "Discovery", 00:32:35.218 "listen_addresses": [ 00:32:35.218 { 00:32:35.218 "trtype": "TCP", 00:32:35.218 "adrfam": "IPv4", 00:32:35.218 "traddr": "10.0.0.2", 00:32:35.218 "trsvcid": "4420" 00:32:35.218 } 00:32:35.218 ], 00:32:35.218 "allow_any_host": true, 00:32:35.218 "hosts": [] 00:32:35.218 }, 00:32:35.218 { 00:32:35.218 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:35.218 "subtype": "NVMe", 00:32:35.218 "listen_addresses": [ 00:32:35.218 { 00:32:35.218 "trtype": "TCP", 00:32:35.218 "adrfam": "IPv4", 00:32:35.218 "traddr": "10.0.0.2", 00:32:35.218 "trsvcid": "4420" 00:32:35.218 } 00:32:35.218 ], 00:32:35.218 "allow_any_host": true, 00:32:35.218 "hosts": [], 00:32:35.218 "serial_number": "SPDK00000000000001", 00:32:35.218 "model_number": "SPDK bdev Controller", 00:32:35.218 "max_namespaces": 32, 00:32:35.218 "min_cntlid": 1, 00:32:35.218 "max_cntlid": 65519, 00:32:35.218 "namespaces": [ 00:32:35.218 { 00:32:35.218 "nsid": 1, 00:32:35.218 "bdev_name": "Malloc0", 00:32:35.218 "name": "Malloc0", 00:32:35.218 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:32:35.218 "eui64": "ABCDEF0123456789", 00:32:35.218 "uuid": "f4df5587-6b03-4345-94f6-20c7d83d9139" 00:32:35.218 } 00:32:35.218 ] 00:32:35.218 } 00:32:35.218 ] 00:32:35.218 10:48:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.218 10:48:40 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:32:35.218 [2024-07-22 10:48:40.690915] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
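rpc_cmd in the trace is the test framework's RPC helper; the same configuration that identify.sh applies above can be issued directly with scripts/rpc.py. The sketch below mirrors the calls visible in the trace: create the TCP transport, back a namespace with a 64 MB malloc bdev (512-byte blocks), create subsystem cnode1 with a fixed NGUID/EUI-64, and expose both the subsystem and the discovery service on 10.0.0.2:4420.

# Equivalent standalone RPC sequence (sketch; rpc.py targets /var/tmp/spdk.sock by default).
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_get_subsystems        # prints the JSON listing shown above

The nvmf_get_subsystems output confirms that both the discovery subsystem and cnode1 (namespace 1 backed by Malloc0) are listening on 10.0.0.2:4420 before spdk_nvme_identify is pointed at the discovery NQN.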
00:32:35.218 [2024-07-22 10:48:40.690964] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2151054 ] 00:32:35.218 EAL: No free 2048 kB hugepages reported on node 1 00:32:35.218 [2024-07-22 10:48:40.726495] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:32:35.218 [2024-07-22 10:48:40.726553] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:32:35.218 [2024-07-22 10:48:40.726558] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:32:35.218 [2024-07-22 10:48:40.726572] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:32:35.218 [2024-07-22 10:48:40.726579] sock.c: 353:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:32:35.219 [2024-07-22 10:48:40.726895] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:32:35.219 [2024-07-22 10:48:40.726929] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x8b0560 0 00:32:35.219 [2024-07-22 10:48:40.733402] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:32:35.219 [2024-07-22 10:48:40.733412] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:32:35.219 [2024-07-22 10:48:40.733417] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:32:35.219 [2024-07-22 10:48:40.733421] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:32:35.219 [2024-07-22 10:48:40.733456] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.219 [2024-07-22 10:48:40.733462] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.219 [2024-07-22 10:48:40.733467] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8b0560) 00:32:35.219 [2024-07-22 10:48:40.733480] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:32:35.219 [2024-07-22 10:48:40.733495] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x90a240, cid 0, qid 0 00:32:35.219 [2024-07-22 10:48:40.741407] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.219 [2024-07-22 10:48:40.741416] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.219 [2024-07-22 10:48:40.741420] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.219 [2024-07-22 10:48:40.741424] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x90a240) on tqpair=0x8b0560 00:32:35.219 [2024-07-22 10:48:40.741436] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:32:35.219 [2024-07-22 10:48:40.741443] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:32:35.219 [2024-07-22 10:48:40.741448] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:32:35.219 [2024-07-22 10:48:40.741463] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.219 [2024-07-22 10:48:40.741468] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.219 [2024-07-22 10:48:40.741471] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8b0560) 00:32:35.219 [2024-07-22 10:48:40.741479] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.219 [2024-07-22 10:48:40.741492] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x90a240, cid 0, qid 0 00:32:35.219 [2024-07-22 10:48:40.741600] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.219 [2024-07-22 10:48:40.741606] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.219 [2024-07-22 10:48:40.741610] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.219 [2024-07-22 10:48:40.741614] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x90a240) on tqpair=0x8b0560 00:32:35.219 [2024-07-22 10:48:40.741621] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:32:35.219 [2024-07-22 10:48:40.741629] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:32:35.219 [2024-07-22 10:48:40.741635] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.219 [2024-07-22 10:48:40.741639] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.219 [2024-07-22 10:48:40.741642] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8b0560) 00:32:35.219 [2024-07-22 10:48:40.741649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.219 [2024-07-22 10:48:40.741660] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x90a240, cid 0, qid 0 00:32:35.219 [2024-07-22 10:48:40.741721] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.219 [2024-07-22 10:48:40.741727] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.219 [2024-07-22 10:48:40.741733] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.219 [2024-07-22 10:48:40.741737] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x90a240) on tqpair=0x8b0560 00:32:35.219 [2024-07-22 10:48:40.741743] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:32:35.219 [2024-07-22 10:48:40.741751] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:32:35.219 [2024-07-22 10:48:40.741757] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.219 [2024-07-22 10:48:40.741761] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.219 [2024-07-22 10:48:40.741765] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8b0560) 00:32:35.219 [2024-07-22 10:48:40.741771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.219 [2024-07-22 10:48:40.741782] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x90a240, cid 0, qid 0 00:32:35.219 [2024-07-22 10:48:40.741843] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.219 
[2024-07-22 10:48:40.741849] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.219 [2024-07-22 10:48:40.741853] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.219 [2024-07-22 10:48:40.741856] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x90a240) on tqpair=0x8b0560 00:32:35.219 [2024-07-22 10:48:40.741862] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:32:35.219 [2024-07-22 10:48:40.741871] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.219 [2024-07-22 10:48:40.741875] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.219 [2024-07-22 10:48:40.741878] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8b0560) 00:32:35.219 [2024-07-22 10:48:40.741885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.219 [2024-07-22 10:48:40.741895] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x90a240, cid 0, qid 0 00:32:35.219 [2024-07-22 10:48:40.741956] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.219 [2024-07-22 10:48:40.741962] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.219 [2024-07-22 10:48:40.741966] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.219 [2024-07-22 10:48:40.741969] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x90a240) on tqpair=0x8b0560 00:32:35.219 [2024-07-22 10:48:40.741974] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:32:35.219 [2024-07-22 10:48:40.741979] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:32:35.219 [2024-07-22 10:48:40.741986] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:32:35.219 [2024-07-22 10:48:40.742092] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:32:35.219 [2024-07-22 10:48:40.742097] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:32:35.219 [2024-07-22 10:48:40.742105] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.219 [2024-07-22 10:48:40.742109] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.219 [2024-07-22 10:48:40.742112] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8b0560) 00:32:35.219 [2024-07-22 10:48:40.742119] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.219 [2024-07-22 10:48:40.742131] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x90a240, cid 0, qid 0 00:32:35.219 [2024-07-22 10:48:40.742192] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.219 [2024-07-22 10:48:40.742199] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.219 [2024-07-22 10:48:40.742202] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:32:35.219 [2024-07-22 10:48:40.742206] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x90a240) on tqpair=0x8b0560 00:32:35.219 [2024-07-22 10:48:40.742211] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:32:35.219 [2024-07-22 10:48:40.742220] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.219 [2024-07-22 10:48:40.742224] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.219 [2024-07-22 10:48:40.742227] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8b0560) 00:32:35.219 [2024-07-22 10:48:40.742234] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.219 [2024-07-22 10:48:40.742244] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x90a240, cid 0, qid 0 00:32:35.219 [2024-07-22 10:48:40.742298] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.219 [2024-07-22 10:48:40.742304] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.219 [2024-07-22 10:48:40.742308] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.219 [2024-07-22 10:48:40.742312] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x90a240) on tqpair=0x8b0560 00:32:35.219 [2024-07-22 10:48:40.742316] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:32:35.219 [2024-07-22 10:48:40.742321] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:32:35.219 [2024-07-22 10:48:40.742328] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:32:35.219 [2024-07-22 10:48:40.742342] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:32:35.219 [2024-07-22 10:48:40.742350] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.219 [2024-07-22 10:48:40.742354] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8b0560) 00:32:35.219 [2024-07-22 10:48:40.742361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.219 [2024-07-22 10:48:40.742371] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x90a240, cid 0, qid 0 00:32:35.219 [2024-07-22 10:48:40.742478] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:35.219 [2024-07-22 10:48:40.742485] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:35.219 [2024-07-22 10:48:40.742489] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:35.219 [2024-07-22 10:48:40.742493] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8b0560): datao=0, datal=4096, cccid=0 00:32:35.219 [2024-07-22 10:48:40.742498] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x90a240) on tqpair(0x8b0560): expected_datao=0, payload_size=4096 00:32:35.219 [2024-07-22 10:48:40.742503] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
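The block of nvme_tcp/nvme_ctrlr DEBUG lines above is the standard NVMe-oF admin-queue bring-up for the discovery controller: FABRIC CONNECT, PROPERTY GET of VS, CAP and CSTS, a PROPERTY SET that raises CC.EN, a poll until CSTS.RDY reads 1, and finally IDENTIFY controller (the 4096-byte C2H data transfer that follows). Any NVMe/TCP host performs the same handshake; as an illustration only (not part of this test run), the kernel initiator loaded earlier via modprobe nvme-tcp goes through it when asked to read the same discovery service:

# Query the same discovery controller with the kernel host stack (illustrative; requires
# nvme-cli and reachability to 10.0.0.2:4420 from the current network namespace).
nvme discover -t tcp -a 10.0.0.2 -s 4420

The records returned should match the Discovery Log Page printed further down: one discovery entry and one NVM subsystem entry for nqn.2016-06.io.spdk:cnode1.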
00:32:35.219 [2024-07-22 10:48:40.742538] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:35.219 [2024-07-22 10:48:40.742542] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:35.219 [2024-07-22 10:48:40.784456] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.219 [2024-07-22 10:48:40.784470] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.219 [2024-07-22 10:48:40.784474] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.219 [2024-07-22 10:48:40.784481] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x90a240) on tqpair=0x8b0560 00:32:35.219 [2024-07-22 10:48:40.784492] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:32:35.219 [2024-07-22 10:48:40.784497] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:32:35.219 [2024-07-22 10:48:40.784502] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:32:35.220 [2024-07-22 10:48:40.784507] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:32:35.220 [2024-07-22 10:48:40.784511] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:32:35.220 [2024-07-22 10:48:40.784516] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:32:35.220 [2024-07-22 10:48:40.784525] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:32:35.220 [2024-07-22 10:48:40.784532] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.220 [2024-07-22 10:48:40.784536] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.220 [2024-07-22 10:48:40.784540] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8b0560) 00:32:35.220 [2024-07-22 10:48:40.784548] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:35.220 [2024-07-22 10:48:40.784561] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x90a240, cid 0, qid 0 00:32:35.220 [2024-07-22 10:48:40.784639] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.220 [2024-07-22 10:48:40.784645] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.220 [2024-07-22 10:48:40.784649] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.220 [2024-07-22 10:48:40.784652] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x90a240) on tqpair=0x8b0560 00:32:35.220 [2024-07-22 10:48:40.784660] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.220 [2024-07-22 10:48:40.784664] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.220 [2024-07-22 10:48:40.784667] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8b0560) 00:32:35.220 [2024-07-22 10:48:40.784673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.220 [2024-07-22 10:48:40.784680] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.220 [2024-07-22 10:48:40.784683] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.220 [2024-07-22 10:48:40.784687] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x8b0560) 00:32:35.220 [2024-07-22 10:48:40.784693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.220 [2024-07-22 10:48:40.784699] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.220 [2024-07-22 10:48:40.784703] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.220 [2024-07-22 10:48:40.784706] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x8b0560) 00:32:35.220 [2024-07-22 10:48:40.784712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.220 [2024-07-22 10:48:40.784718] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.220 [2024-07-22 10:48:40.784721] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.220 [2024-07-22 10:48:40.784725] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b0560) 00:32:35.220 [2024-07-22 10:48:40.784730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.220 [2024-07-22 10:48:40.784737] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:32:35.220 [2024-07-22 10:48:40.784748] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:32:35.220 [2024-07-22 10:48:40.784754] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.220 [2024-07-22 10:48:40.784757] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8b0560) 00:32:35.220 [2024-07-22 10:48:40.784764] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.220 [2024-07-22 10:48:40.784776] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x90a240, cid 0, qid 0 00:32:35.220 [2024-07-22 10:48:40.784781] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x90a3c0, cid 1, qid 0 00:32:35.220 [2024-07-22 10:48:40.784785] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x90a540, cid 2, qid 0 00:32:35.220 [2024-07-22 10:48:40.784790] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x90a6c0, cid 3, qid 0 00:32:35.220 [2024-07-22 10:48:40.784795] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x90a840, cid 4, qid 0 00:32:35.220 [2024-07-22 10:48:40.784890] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.220 [2024-07-22 10:48:40.784896] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.220 [2024-07-22 10:48:40.784899] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.220 [2024-07-22 10:48:40.784903] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x90a840) on tqpair=0x8b0560 00:32:35.220 [2024-07-22 10:48:40.784908] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:32:35.220 [2024-07-22 10:48:40.784913] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:32:35.220 [2024-07-22 10:48:40.784924] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.220 [2024-07-22 10:48:40.784927] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8b0560) 00:32:35.220 [2024-07-22 10:48:40.784934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.220 [2024-07-22 10:48:40.784943] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x90a840, cid 4, qid 0 00:32:35.220 [2024-07-22 10:48:40.785016] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:35.220 [2024-07-22 10:48:40.785023] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:35.220 [2024-07-22 10:48:40.785026] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:35.220 [2024-07-22 10:48:40.785030] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8b0560): datao=0, datal=4096, cccid=4 00:32:35.220 [2024-07-22 10:48:40.785035] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x90a840) on tqpair(0x8b0560): expected_datao=0, payload_size=4096 00:32:35.220 [2024-07-22 10:48:40.785039] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.220 [2024-07-22 10:48:40.785046] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:35.220 [2024-07-22 10:48:40.785050] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:35.220 [2024-07-22 10:48:40.785079] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.220 [2024-07-22 10:48:40.785086] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.220 [2024-07-22 10:48:40.785089] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.220 [2024-07-22 10:48:40.785093] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x90a840) on tqpair=0x8b0560 00:32:35.220 [2024-07-22 10:48:40.785104] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:32:35.220 [2024-07-22 10:48:40.785125] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.220 [2024-07-22 10:48:40.785130] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8b0560) 00:32:35.220 [2024-07-22 10:48:40.785137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.220 [2024-07-22 10:48:40.785144] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.220 [2024-07-22 10:48:40.785148] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.220 [2024-07-22 10:48:40.785151] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8b0560) 00:32:35.220 [2024-07-22 10:48:40.785157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.220 [2024-07-22 10:48:40.785170] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x90a840, cid 4, qid 0 00:32:35.220 [2024-07-22 10:48:40.785175] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x90a9c0, cid 5, qid 0 00:32:35.220 [2024-07-22 10:48:40.785275] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:35.220 [2024-07-22 10:48:40.785281] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:35.220 [2024-07-22 10:48:40.785285] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:35.220 [2024-07-22 10:48:40.785288] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8b0560): datao=0, datal=1024, cccid=4 00:32:35.220 [2024-07-22 10:48:40.785293] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x90a840) on tqpair(0x8b0560): expected_datao=0, payload_size=1024 00:32:35.220 [2024-07-22 10:48:40.785297] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.220 [2024-07-22 10:48:40.785304] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:35.220 [2024-07-22 10:48:40.785307] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:35.220 [2024-07-22 10:48:40.785313] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.220 [2024-07-22 10:48:40.785319] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.220 [2024-07-22 10:48:40.785322] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.220 [2024-07-22 10:48:40.785326] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x90a9c0) on tqpair=0x8b0560 00:32:35.220 [2024-07-22 10:48:40.829404] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.220 [2024-07-22 10:48:40.829414] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.220 [2024-07-22 10:48:40.829417] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.220 [2024-07-22 10:48:40.829421] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x90a840) on tqpair=0x8b0560 00:32:35.220 [2024-07-22 10:48:40.829431] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.220 [2024-07-22 10:48:40.829435] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8b0560) 00:32:35.220 [2024-07-22 10:48:40.829442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.220 [2024-07-22 10:48:40.829457] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x90a840, cid 4, qid 0 00:32:35.220 [2024-07-22 10:48:40.829557] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:35.220 [2024-07-22 10:48:40.829563] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:35.220 [2024-07-22 10:48:40.829567] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:35.220 [2024-07-22 10:48:40.829570] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8b0560): datao=0, datal=3072, cccid=4 00:32:35.220 [2024-07-22 10:48:40.829575] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x90a840) on tqpair(0x8b0560): expected_datao=0, payload_size=3072 00:32:35.220 [2024-07-22 10:48:40.829579] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.220 [2024-07-22 10:48:40.829586] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:35.220 [2024-07-22 10:48:40.829589] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:35.220 [2024-07-22 10:48:40.829689] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.220 [2024-07-22 10:48:40.829695] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.220 [2024-07-22 10:48:40.829699] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.220 [2024-07-22 10:48:40.829703] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x90a840) on tqpair=0x8b0560 00:32:35.220 [2024-07-22 10:48:40.829710] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.220 [2024-07-22 10:48:40.829714] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8b0560) 00:32:35.220 [2024-07-22 10:48:40.829720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.220 [2024-07-22 10:48:40.829734] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x90a840, cid 4, qid 0 00:32:35.220 [2024-07-22 10:48:40.829835] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:35.220 [2024-07-22 10:48:40.829841] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:35.221 [2024-07-22 10:48:40.829844] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:35.221 [2024-07-22 10:48:40.829848] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8b0560): datao=0, datal=8, cccid=4 00:32:35.221 [2024-07-22 10:48:40.829852] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x90a840) on tqpair(0x8b0560): expected_datao=0, payload_size=8 00:32:35.221 [2024-07-22 10:48:40.829856] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.221 [2024-07-22 10:48:40.829863] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:35.221 [2024-07-22 10:48:40.829866] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:35.221 [2024-07-22 10:48:40.871498] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.221 [2024-07-22 10:48:40.871510] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.221 [2024-07-22 10:48:40.871513] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.221 [2024-07-22 10:48:40.871517] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x90a840) on tqpair=0x8b0560 00:32:35.221 ===================================================== 00:32:35.221 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:32:35.221 ===================================================== 00:32:35.221 Controller Capabilities/Features 00:32:35.221 ================================ 00:32:35.221 Vendor ID: 0000 00:32:35.221 Subsystem Vendor ID: 0000 00:32:35.221 Serial Number: .................... 00:32:35.221 Model Number: ........................................ 
00:32:35.221 Firmware Version: 24.09 00:32:35.221 Recommended Arb Burst: 0 00:32:35.221 IEEE OUI Identifier: 00 00 00 00:32:35.221 Multi-path I/O 00:32:35.221 May have multiple subsystem ports: No 00:32:35.221 May have multiple controllers: No 00:32:35.221 Associated with SR-IOV VF: No 00:32:35.221 Max Data Transfer Size: 131072 00:32:35.221 Max Number of Namespaces: 0 00:32:35.221 Max Number of I/O Queues: 1024 00:32:35.221 NVMe Specification Version (VS): 1.3 00:32:35.221 NVMe Specification Version (Identify): 1.3 00:32:35.221 Maximum Queue Entries: 128 00:32:35.221 Contiguous Queues Required: Yes 00:32:35.221 Arbitration Mechanisms Supported 00:32:35.221 Weighted Round Robin: Not Supported 00:32:35.221 Vendor Specific: Not Supported 00:32:35.221 Reset Timeout: 15000 ms 00:32:35.221 Doorbell Stride: 4 bytes 00:32:35.221 NVM Subsystem Reset: Not Supported 00:32:35.221 Command Sets Supported 00:32:35.221 NVM Command Set: Supported 00:32:35.221 Boot Partition: Not Supported 00:32:35.221 Memory Page Size Minimum: 4096 bytes 00:32:35.221 Memory Page Size Maximum: 4096 bytes 00:32:35.221 Persistent Memory Region: Not Supported 00:32:35.221 Optional Asynchronous Events Supported 00:32:35.221 Namespace Attribute Notices: Not Supported 00:32:35.221 Firmware Activation Notices: Not Supported 00:32:35.221 ANA Change Notices: Not Supported 00:32:35.221 PLE Aggregate Log Change Notices: Not Supported 00:32:35.221 LBA Status Info Alert Notices: Not Supported 00:32:35.221 EGE Aggregate Log Change Notices: Not Supported 00:32:35.221 Normal NVM Subsystem Shutdown event: Not Supported 00:32:35.221 Zone Descriptor Change Notices: Not Supported 00:32:35.221 Discovery Log Change Notices: Supported 00:32:35.221 Controller Attributes 00:32:35.221 128-bit Host Identifier: Not Supported 00:32:35.221 Non-Operational Permissive Mode: Not Supported 00:32:35.221 NVM Sets: Not Supported 00:32:35.221 Read Recovery Levels: Not Supported 00:32:35.221 Endurance Groups: Not Supported 00:32:35.221 Predictable Latency Mode: Not Supported 00:32:35.221 Traffic Based Keep ALive: Not Supported 00:32:35.221 Namespace Granularity: Not Supported 00:32:35.221 SQ Associations: Not Supported 00:32:35.221 UUID List: Not Supported 00:32:35.221 Multi-Domain Subsystem: Not Supported 00:32:35.221 Fixed Capacity Management: Not Supported 00:32:35.221 Variable Capacity Management: Not Supported 00:32:35.221 Delete Endurance Group: Not Supported 00:32:35.221 Delete NVM Set: Not Supported 00:32:35.221 Extended LBA Formats Supported: Not Supported 00:32:35.221 Flexible Data Placement Supported: Not Supported 00:32:35.221 00:32:35.221 Controller Memory Buffer Support 00:32:35.221 ================================ 00:32:35.221 Supported: No 00:32:35.221 00:32:35.221 Persistent Memory Region Support 00:32:35.221 ================================ 00:32:35.221 Supported: No 00:32:35.221 00:32:35.221 Admin Command Set Attributes 00:32:35.221 ============================ 00:32:35.221 Security Send/Receive: Not Supported 00:32:35.221 Format NVM: Not Supported 00:32:35.221 Firmware Activate/Download: Not Supported 00:32:35.221 Namespace Management: Not Supported 00:32:35.221 Device Self-Test: Not Supported 00:32:35.221 Directives: Not Supported 00:32:35.221 NVMe-MI: Not Supported 00:32:35.221 Virtualization Management: Not Supported 00:32:35.221 Doorbell Buffer Config: Not Supported 00:32:35.221 Get LBA Status Capability: Not Supported 00:32:35.221 Command & Feature Lockdown Capability: Not Supported 00:32:35.221 Abort Command Limit: 1 00:32:35.221 Async 
Event Request Limit: 4 00:32:35.221 Number of Firmware Slots: N/A 00:32:35.221 Firmware Slot 1 Read-Only: N/A 00:32:35.221 Firmware Activation Without Reset: N/A 00:32:35.221 Multiple Update Detection Support: N/A 00:32:35.221 Firmware Update Granularity: No Information Provided 00:32:35.221 Per-Namespace SMART Log: No 00:32:35.221 Asymmetric Namespace Access Log Page: Not Supported 00:32:35.221 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:32:35.221 Command Effects Log Page: Not Supported 00:32:35.221 Get Log Page Extended Data: Supported 00:32:35.221 Telemetry Log Pages: Not Supported 00:32:35.221 Persistent Event Log Pages: Not Supported 00:32:35.221 Supported Log Pages Log Page: May Support 00:32:35.221 Commands Supported & Effects Log Page: Not Supported 00:32:35.221 Feature Identifiers & Effects Log Page:May Support 00:32:35.221 NVMe-MI Commands & Effects Log Page: May Support 00:32:35.221 Data Area 4 for Telemetry Log: Not Supported 00:32:35.221 Error Log Page Entries Supported: 128 00:32:35.221 Keep Alive: Not Supported 00:32:35.221 00:32:35.221 NVM Command Set Attributes 00:32:35.221 ========================== 00:32:35.221 Submission Queue Entry Size 00:32:35.221 Max: 1 00:32:35.221 Min: 1 00:32:35.221 Completion Queue Entry Size 00:32:35.221 Max: 1 00:32:35.221 Min: 1 00:32:35.221 Number of Namespaces: 0 00:32:35.221 Compare Command: Not Supported 00:32:35.221 Write Uncorrectable Command: Not Supported 00:32:35.221 Dataset Management Command: Not Supported 00:32:35.221 Write Zeroes Command: Not Supported 00:32:35.221 Set Features Save Field: Not Supported 00:32:35.221 Reservations: Not Supported 00:32:35.221 Timestamp: Not Supported 00:32:35.221 Copy: Not Supported 00:32:35.221 Volatile Write Cache: Not Present 00:32:35.221 Atomic Write Unit (Normal): 1 00:32:35.221 Atomic Write Unit (PFail): 1 00:32:35.221 Atomic Compare & Write Unit: 1 00:32:35.221 Fused Compare & Write: Supported 00:32:35.221 Scatter-Gather List 00:32:35.221 SGL Command Set: Supported 00:32:35.221 SGL Keyed: Supported 00:32:35.221 SGL Bit Bucket Descriptor: Not Supported 00:32:35.221 SGL Metadata Pointer: Not Supported 00:32:35.221 Oversized SGL: Not Supported 00:32:35.221 SGL Metadata Address: Not Supported 00:32:35.221 SGL Offset: Supported 00:32:35.221 Transport SGL Data Block: Not Supported 00:32:35.221 Replay Protected Memory Block: Not Supported 00:32:35.221 00:32:35.221 Firmware Slot Information 00:32:35.221 ========================= 00:32:35.221 Active slot: 0 00:32:35.221 00:32:35.221 00:32:35.221 Error Log 00:32:35.221 ========= 00:32:35.221 00:32:35.221 Active Namespaces 00:32:35.221 ================= 00:32:35.221 Discovery Log Page 00:32:35.221 ================== 00:32:35.221 Generation Counter: 2 00:32:35.221 Number of Records: 2 00:32:35.221 Record Format: 0 00:32:35.221 00:32:35.221 Discovery Log Entry 0 00:32:35.221 ---------------------- 00:32:35.221 Transport Type: 3 (TCP) 00:32:35.221 Address Family: 1 (IPv4) 00:32:35.221 Subsystem Type: 3 (Current Discovery Subsystem) 00:32:35.221 Entry Flags: 00:32:35.221 Duplicate Returned Information: 1 00:32:35.221 Explicit Persistent Connection Support for Discovery: 1 00:32:35.221 Transport Requirements: 00:32:35.221 Secure Channel: Not Required 00:32:35.221 Port ID: 0 (0x0000) 00:32:35.221 Controller ID: 65535 (0xffff) 00:32:35.221 Admin Max SQ Size: 128 00:32:35.221 Transport Service Identifier: 4420 00:32:35.221 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:32:35.221 Transport Address: 10.0.0.2 00:32:35.221 
Discovery Log Entry 1 00:32:35.221 ---------------------- 00:32:35.221 Transport Type: 3 (TCP) 00:32:35.221 Address Family: 1 (IPv4) 00:32:35.221 Subsystem Type: 2 (NVM Subsystem) 00:32:35.221 Entry Flags: 00:32:35.221 Duplicate Returned Information: 0 00:32:35.221 Explicit Persistent Connection Support for Discovery: 0 00:32:35.221 Transport Requirements: 00:32:35.221 Secure Channel: Not Required 00:32:35.221 Port ID: 0 (0x0000) 00:32:35.221 Controller ID: 65535 (0xffff) 00:32:35.221 Admin Max SQ Size: 128 00:32:35.221 Transport Service Identifier: 4420 00:32:35.221 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:32:35.221 Transport Address: 10.0.0.2 [2024-07-22 10:48:40.871602] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:32:35.222 [2024-07-22 10:48:40.871612] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x90a240) on tqpair=0x8b0560 00:32:35.222 [2024-07-22 10:48:40.871619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.222 [2024-07-22 10:48:40.871625] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x90a3c0) on tqpair=0x8b0560 00:32:35.222 [2024-07-22 10:48:40.871630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.222 [2024-07-22 10:48:40.871635] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x90a540) on tqpair=0x8b0560 00:32:35.222 [2024-07-22 10:48:40.871640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.222 [2024-07-22 10:48:40.871645] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x90a6c0) on tqpair=0x8b0560 00:32:35.222 [2024-07-22 10:48:40.871649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.222 [2024-07-22 10:48:40.871660] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.222 [2024-07-22 10:48:40.871664] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.222 [2024-07-22 10:48:40.871668] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b0560) 00:32:35.222 [2024-07-22 10:48:40.871675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.222 [2024-07-22 10:48:40.871689] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x90a6c0, cid 3, qid 0 00:32:35.222 [2024-07-22 10:48:40.871783] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.222 [2024-07-22 10:48:40.871790] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.222 [2024-07-22 10:48:40.871793] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.222 [2024-07-22 10:48:40.871797] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x90a6c0) on tqpair=0x8b0560 00:32:35.222 [2024-07-22 10:48:40.871804] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.222 [2024-07-22 10:48:40.871808] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.222 [2024-07-22 10:48:40.871812] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b0560) 00:32:35.222 [2024-07-22 10:48:40.871818] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.222 [2024-07-22 10:48:40.871831] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x90a6c0, cid 3, qid 0 00:32:35.222 [2024-07-22 10:48:40.871947] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.222 [2024-07-22 10:48:40.871954] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.222 [2024-07-22 10:48:40.871957] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.222 [2024-07-22 10:48:40.871961] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x90a6c0) on tqpair=0x8b0560 00:32:35.222 [2024-07-22 10:48:40.871966] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:32:35.222 [2024-07-22 10:48:40.871971] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:32:35.222 [2024-07-22 10:48:40.871980] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.222 [2024-07-22 10:48:40.871983] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.222 [2024-07-22 10:48:40.871987] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b0560) 00:32:35.222 [2024-07-22 10:48:40.871994] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.222 [2024-07-22 10:48:40.872004] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x90a6c0, cid 3, qid 0 00:32:35.222 [2024-07-22 10:48:40.872063] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.222 [2024-07-22 10:48:40.872069] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.222 [2024-07-22 10:48:40.872073] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.222 [2024-07-22 10:48:40.872077] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x90a6c0) on tqpair=0x8b0560 00:32:35.222 [2024-07-22 10:48:40.872087] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.222 [2024-07-22 10:48:40.872090] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.222 [2024-07-22 10:48:40.872094] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b0560) 00:32:35.222 [2024-07-22 10:48:40.872101] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.222 [2024-07-22 10:48:40.872110] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x90a6c0, cid 3, qid 0 00:32:35.222 [2024-07-22 10:48:40.872182] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.222 [2024-07-22 10:48:40.872188] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.222 [2024-07-22 10:48:40.872192] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.222 [2024-07-22 10:48:40.872196] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x90a6c0) on tqpair=0x8b0560 00:32:35.222 [2024-07-22 10:48:40.872205] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.222 [2024-07-22 10:48:40.872209] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.222 [2024-07-22 10:48:40.872212] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b0560) 00:32:35.222 [2024-07-22 10:48:40.872219] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.222 [2024-07-22 10:48:40.872230] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x90a6c0, cid 3, qid 0 00:32:35.222 [2024-07-22 10:48:40.872299] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.222 [2024-07-22 10:48:40.872305] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.222 [2024-07-22 10:48:40.872308] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.222 [2024-07-22 10:48:40.872312] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x90a6c0) on tqpair=0x8b0560 00:32:35.222 [2024-07-22 10:48:40.872321] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.222 [2024-07-22 10:48:40.872325] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.222 [2024-07-22 10:48:40.872329] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b0560) 00:32:35.222 [2024-07-22 10:48:40.872335] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.222 [2024-07-22 10:48:40.872345] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x90a6c0, cid 3, qid 0 00:32:35.222 [2024-07-22 10:48:40.872402] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.222 [2024-07-22 10:48:40.872409] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.222 [2024-07-22 10:48:40.872412] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.222 [2024-07-22 10:48:40.872416] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x90a6c0) on tqpair=0x8b0560 00:32:35.222 [2024-07-22 10:48:40.872426] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.222 [2024-07-22 10:48:40.872429] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.222 [2024-07-22 10:48:40.872433] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b0560) 00:32:35.222 [2024-07-22 10:48:40.872440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.222 [2024-07-22 10:48:40.872449] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x90a6c0, cid 3, qid 0 00:32:35.222 [2024-07-22 10:48:40.872512] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.222 [2024-07-22 10:48:40.872518] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.222 [2024-07-22 10:48:40.872521] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.222 [2024-07-22 10:48:40.872525] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x90a6c0) on tqpair=0x8b0560 00:32:35.222 [2024-07-22 10:48:40.872534] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.222 [2024-07-22 10:48:40.872538] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.222 [2024-07-22 10:48:40.872542] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b0560) 00:32:35.222 [2024-07-22 10:48:40.872548] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.222 [2024-07-22 10:48:40.872558] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x90a6c0, cid 3, qid 0 00:32:35.222 [2024-07-22 10:48:40.872611] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.222 [2024-07-22 10:48:40.872617] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.222 [2024-07-22 10:48:40.872620] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.222 [2024-07-22 10:48:40.872624] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x90a6c0) on tqpair=0x8b0560 00:32:35.222 [2024-07-22 10:48:40.872634] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.222 [2024-07-22 10:48:40.872637] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.222 [2024-07-22 10:48:40.872641] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b0560) 00:32:35.222 [2024-07-22 10:48:40.872647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.222 [2024-07-22 10:48:40.872657] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x90a6c0, cid 3, qid 0 00:32:35.222 [2024-07-22 10:48:40.872727] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.222 [2024-07-22 10:48:40.872733] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.222 [2024-07-22 10:48:40.872737] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.222 [2024-07-22 10:48:40.872741] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x90a6c0) on tqpair=0x8b0560 00:32:35.222 [2024-07-22 10:48:40.872750] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.222 [2024-07-22 10:48:40.872754] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.222 [2024-07-22 10:48:40.872757] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b0560) 00:32:35.222 [2024-07-22 10:48:40.872764] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.222 [2024-07-22 10:48:40.872774] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x90a6c0, cid 3, qid 0 00:32:35.222 [2024-07-22 10:48:40.872877] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.222 [2024-07-22 10:48:40.872883] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.222 [2024-07-22 10:48:40.872886] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.222 [2024-07-22 10:48:40.872890] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x90a6c0) on tqpair=0x8b0560 00:32:35.222 [2024-07-22 10:48:40.872900] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.222 [2024-07-22 10:48:40.872904] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.222 [2024-07-22 10:48:40.872907] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b0560) 00:32:35.222 [2024-07-22 10:48:40.872914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.222 [2024-07-22 10:48:40.872923] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x90a6c0, cid 3, qid 0 00:32:35.222 [2024-07-22 10:48:40.872988] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.222 [2024-07-22 10:48:40.872994] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.222 [2024-07-22 10:48:40.872997] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.222 [2024-07-22 10:48:40.873001] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x90a6c0) on tqpair=0x8b0560 00:32:35.222 [2024-07-22 10:48:40.873010] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.222 [2024-07-22 10:48:40.873014] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.223 [2024-07-22 10:48:40.873018] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b0560) 00:32:35.223 [2024-07-22 10:48:40.873024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.223 [2024-07-22 10:48:40.873034] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x90a6c0, cid 3, qid 0 00:32:35.223 [2024-07-22 10:48:40.873103] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.223 [2024-07-22 10:48:40.873109] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.223 [2024-07-22 10:48:40.873112] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.223 [2024-07-22 10:48:40.873116] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x90a6c0) on tqpair=0x8b0560 00:32:35.223 [2024-07-22 10:48:40.873126] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.223 [2024-07-22 10:48:40.873129] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.223 [2024-07-22 10:48:40.873133] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b0560) 00:32:35.223 [2024-07-22 10:48:40.873140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.223 [2024-07-22 10:48:40.873149] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x90a6c0, cid 3, qid 0 00:32:35.223 [2024-07-22 10:48:40.873220] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.223 [2024-07-22 10:48:40.873228] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.223 [2024-07-22 10:48:40.873232] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.223 [2024-07-22 10:48:40.873235] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x90a6c0) on tqpair=0x8b0560 00:32:35.223 [2024-07-22 10:48:40.873245] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.223 [2024-07-22 10:48:40.873248] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.223 [2024-07-22 10:48:40.873252] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b0560) 00:32:35.223 [2024-07-22 10:48:40.873259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.223 [2024-07-22 10:48:40.873268] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x90a6c0, cid 3, qid 0 00:32:35.223 [2024-07-22 10:48:40.873373] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.223 [2024-07-22 10:48:40.873379] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.223 [2024-07-22 10:48:40.873382] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.223 [2024-07-22 10:48:40.873386] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x90a6c0) on tqpair=0x8b0560 00:32:35.223 [2024-07-22 10:48:40.877399] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.223 [2024-07-22 10:48:40.877405] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.223 [2024-07-22 10:48:40.877409] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b0560) 00:32:35.223 [2024-07-22 10:48:40.877415] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.223 [2024-07-22 10:48:40.877426] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x90a6c0, cid 3, qid 0 00:32:35.223 [2024-07-22 10:48:40.877509] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.223 [2024-07-22 10:48:40.877515] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.223 [2024-07-22 10:48:40.877519] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.223 [2024-07-22 10:48:40.877523] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x90a6c0) on tqpair=0x8b0560 00:32:35.223 [2024-07-22 10:48:40.877530] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:32:35.223 00:32:35.223 10:48:40 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:32:35.223 [2024-07-22 10:48:40.914259] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
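[editor's note] The spdk_nvme_identify run launched above drives the host-side flow these traces record: parse the TCP transport ID given with -r, connect to the subsystem (which walks the connect/enable/identify state machine logged below), read the controller's identify data, and detach. A minimal sketch of that flow against the public SPDK host API follows; it is illustrative only, not the tool's actual source, and the program name is assumed.

    /* Hedged sketch: connect to the NVMe-oF/TCP target exercised above and
     * print a few identify fields. Only public SPDK host API calls are used;
     * the transport string mirrors the -r argument passed above. */
    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch";   /* illustrative process name */
        if (spdk_env_init(&env_opts) < 0) {
            return 1;
        }

        /* Same connection parameters as the test run above. */
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
                "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            return 1;
        }

        /* Triggers the connect/enable/identify sequence traced in the log. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }

        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("Serial Number: %.20s\n", cdata->sn);
        printf("Model Number:  %.40s\n", cdata->mn);

        spdk_nvme_detach(ctrlr);
        return 0;
    }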
00:32:35.223 [2024-07-22 10:48:40.914302] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2151062 ] 00:32:35.486 EAL: No free 2048 kB hugepages reported on node 1 00:32:35.486 [2024-07-22 10:48:40.946942] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:32:35.486 [2024-07-22 10:48:40.946987] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:32:35.486 [2024-07-22 10:48:40.946992] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:32:35.486 [2024-07-22 10:48:40.947005] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:32:35.486 [2024-07-22 10:48:40.947011] sock.c: 353:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:32:35.486 [2024-07-22 10:48:40.947430] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:32:35.486 [2024-07-22 10:48:40.947458] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2223560 0 00:32:35.486 [2024-07-22 10:48:40.961401] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:32:35.486 [2024-07-22 10:48:40.961414] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:32:35.486 [2024-07-22 10:48:40.961418] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:32:35.486 [2024-07-22 10:48:40.961422] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:32:35.486 [2024-07-22 10:48:40.961455] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.486 [2024-07-22 10:48:40.961460] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.486 [2024-07-22 10:48:40.961464] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2223560) 00:32:35.486 [2024-07-22 10:48:40.961476] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:32:35.486 [2024-07-22 10:48:40.961493] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x227d240, cid 0, qid 0 00:32:35.486 [2024-07-22 10:48:40.969406] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.486 [2024-07-22 10:48:40.969415] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.486 [2024-07-22 10:48:40.969418] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.486 [2024-07-22 10:48:40.969423] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x227d240) on tqpair=0x2223560 00:32:35.486 [2024-07-22 10:48:40.969434] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:32:35.486 [2024-07-22 10:48:40.969440] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:32:35.486 [2024-07-22 10:48:40.969445] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:32:35.486 [2024-07-22 10:48:40.969459] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.486 [2024-07-22 10:48:40.969463] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:32:35.486 [2024-07-22 10:48:40.969466] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2223560) 00:32:35.486 [2024-07-22 10:48:40.969474] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.486 [2024-07-22 10:48:40.969486] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x227d240, cid 0, qid 0 00:32:35.486 [2024-07-22 10:48:40.969700] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.486 [2024-07-22 10:48:40.969706] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.486 [2024-07-22 10:48:40.969710] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.486 [2024-07-22 10:48:40.969714] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x227d240) on tqpair=0x2223560 00:32:35.486 [2024-07-22 10:48:40.969721] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:32:35.486 [2024-07-22 10:48:40.969728] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:32:35.486 [2024-07-22 10:48:40.969735] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.486 [2024-07-22 10:48:40.969739] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.486 [2024-07-22 10:48:40.969742] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2223560) 00:32:35.486 [2024-07-22 10:48:40.969749] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.486 [2024-07-22 10:48:40.969759] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x227d240, cid 0, qid 0 00:32:35.486 [2024-07-22 10:48:40.969999] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.486 [2024-07-22 10:48:40.970005] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.486 [2024-07-22 10:48:40.970012] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.486 [2024-07-22 10:48:40.970016] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x227d240) on tqpair=0x2223560 00:32:35.486 [2024-07-22 10:48:40.970021] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:32:35.486 [2024-07-22 10:48:40.970028] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:32:35.486 [2024-07-22 10:48:40.970035] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.486 [2024-07-22 10:48:40.970039] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.486 [2024-07-22 10:48:40.970042] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2223560) 00:32:35.486 [2024-07-22 10:48:40.970049] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.486 [2024-07-22 10:48:40.970059] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x227d240, cid 0, qid 0 00:32:35.486 [2024-07-22 10:48:40.970249] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.486 [2024-07-22 10:48:40.970255] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:32:35.486 [2024-07-22 10:48:40.970259] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.486 [2024-07-22 10:48:40.970263] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x227d240) on tqpair=0x2223560 00:32:35.486 [2024-07-22 10:48:40.970267] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:32:35.486 [2024-07-22 10:48:40.970276] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.486 [2024-07-22 10:48:40.970280] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.486 [2024-07-22 10:48:40.970284] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2223560) 00:32:35.486 [2024-07-22 10:48:40.970290] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.486 [2024-07-22 10:48:40.970300] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x227d240, cid 0, qid 0 00:32:35.486 [2024-07-22 10:48:40.970510] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.486 [2024-07-22 10:48:40.970516] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.486 [2024-07-22 10:48:40.970520] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.486 [2024-07-22 10:48:40.970523] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x227d240) on tqpair=0x2223560 00:32:35.486 [2024-07-22 10:48:40.970528] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:32:35.486 [2024-07-22 10:48:40.970533] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:32:35.486 [2024-07-22 10:48:40.970540] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:32:35.486 [2024-07-22 10:48:40.970645] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:32:35.486 [2024-07-22 10:48:40.970649] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:32:35.486 [2024-07-22 10:48:40.970657] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.486 [2024-07-22 10:48:40.970660] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.486 [2024-07-22 10:48:40.970664] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2223560) 00:32:35.486 [2024-07-22 10:48:40.970670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.486 [2024-07-22 10:48:40.970680] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x227d240, cid 0, qid 0 00:32:35.486 [2024-07-22 10:48:40.970904] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.486 [2024-07-22 10:48:40.970911] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.486 [2024-07-22 10:48:40.970914] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.486 [2024-07-22 10:48:40.970918] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x227d240) on 
tqpair=0x2223560 00:32:35.486 [2024-07-22 10:48:40.970922] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:32:35.486 [2024-07-22 10:48:40.970931] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.486 [2024-07-22 10:48:40.970935] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.487 [2024-07-22 10:48:40.970939] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2223560) 00:32:35.487 [2024-07-22 10:48:40.970945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.487 [2024-07-22 10:48:40.970955] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x227d240, cid 0, qid 0 00:32:35.487 [2024-07-22 10:48:40.971155] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.487 [2024-07-22 10:48:40.971161] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.487 [2024-07-22 10:48:40.971164] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.487 [2024-07-22 10:48:40.971168] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x227d240) on tqpair=0x2223560 00:32:35.487 [2024-07-22 10:48:40.971172] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:32:35.487 [2024-07-22 10:48:40.971177] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:32:35.487 [2024-07-22 10:48:40.971184] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:32:35.487 [2024-07-22 10:48:40.971192] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:32:35.487 [2024-07-22 10:48:40.971199] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.487 [2024-07-22 10:48:40.971203] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2223560) 00:32:35.487 [2024-07-22 10:48:40.971210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.487 [2024-07-22 10:48:40.971220] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x227d240, cid 0, qid 0 00:32:35.487 [2024-07-22 10:48:40.971458] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:35.487 [2024-07-22 10:48:40.971465] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:35.487 [2024-07-22 10:48:40.971469] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:35.487 [2024-07-22 10:48:40.971472] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2223560): datao=0, datal=4096, cccid=0 00:32:35.487 [2024-07-22 10:48:40.971477] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x227d240) on tqpair(0x2223560): expected_datao=0, payload_size=4096 00:32:35.487 [2024-07-22 10:48:40.971481] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.487 [2024-07-22 10:48:40.971523] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:35.487 [2024-07-22 10:48:40.971527] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:35.487 [2024-07-22 10:48:41.016401] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.487 [2024-07-22 10:48:41.016411] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.487 [2024-07-22 10:48:41.016415] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.487 [2024-07-22 10:48:41.016419] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x227d240) on tqpair=0x2223560 00:32:35.487 [2024-07-22 10:48:41.016429] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:32:35.487 [2024-07-22 10:48:41.016437] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:32:35.487 [2024-07-22 10:48:41.016441] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:32:35.487 [2024-07-22 10:48:41.016445] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:32:35.487 [2024-07-22 10:48:41.016450] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:32:35.487 [2024-07-22 10:48:41.016454] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:32:35.487 [2024-07-22 10:48:41.016462] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:32:35.487 [2024-07-22 10:48:41.016469] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.487 [2024-07-22 10:48:41.016473] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.487 [2024-07-22 10:48:41.016476] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2223560) 00:32:35.487 [2024-07-22 10:48:41.016484] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:35.487 [2024-07-22 10:48:41.016495] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x227d240, cid 0, qid 0 00:32:35.487 [2024-07-22 10:48:41.016729] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.487 [2024-07-22 10:48:41.016736] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.487 [2024-07-22 10:48:41.016739] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.487 [2024-07-22 10:48:41.016743] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x227d240) on tqpair=0x2223560 00:32:35.487 [2024-07-22 10:48:41.016750] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.487 [2024-07-22 10:48:41.016753] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.487 [2024-07-22 10:48:41.016757] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2223560) 00:32:35.487 [2024-07-22 10:48:41.016763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.487 [2024-07-22 10:48:41.016769] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.487 [2024-07-22 10:48:41.016772] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.487 [2024-07-22 10:48:41.016776] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2223560) 00:32:35.487 [2024-07-22 10:48:41.016782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.487 [2024-07-22 10:48:41.016788] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.487 [2024-07-22 10:48:41.016791] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.487 [2024-07-22 10:48:41.016795] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2223560) 00:32:35.487 [2024-07-22 10:48:41.016800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.487 [2024-07-22 10:48:41.016807] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.487 [2024-07-22 10:48:41.016810] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.487 [2024-07-22 10:48:41.016814] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2223560) 00:32:35.487 [2024-07-22 10:48:41.016819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.487 [2024-07-22 10:48:41.016824] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:32:35.487 [2024-07-22 10:48:41.016834] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:32:35.487 [2024-07-22 10:48:41.016843] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.487 [2024-07-22 10:48:41.016846] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2223560) 00:32:35.487 [2024-07-22 10:48:41.016853] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.487 [2024-07-22 10:48:41.016865] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x227d240, cid 0, qid 0 00:32:35.487 [2024-07-22 10:48:41.016870] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x227d3c0, cid 1, qid 0 00:32:35.487 [2024-07-22 10:48:41.016875] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x227d540, cid 2, qid 0 00:32:35.487 [2024-07-22 10:48:41.016879] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x227d6c0, cid 3, qid 0 00:32:35.487 [2024-07-22 10:48:41.016884] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x227d840, cid 4, qid 0 00:32:35.487 [2024-07-22 10:48:41.017083] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.487 [2024-07-22 10:48:41.017089] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.487 [2024-07-22 10:48:41.017092] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.487 [2024-07-22 10:48:41.017096] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x227d840) on tqpair=0x2223560 00:32:35.487 [2024-07-22 10:48:41.017101] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:32:35.487 [2024-07-22 10:48:41.017106] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to 
identify controller iocs specific (timeout 30000 ms) 00:32:35.487 [2024-07-22 10:48:41.017113] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:32:35.487 [2024-07-22 10:48:41.017120] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:32:35.487 [2024-07-22 10:48:41.017126] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.487 [2024-07-22 10:48:41.017130] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.487 [2024-07-22 10:48:41.017133] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2223560) 00:32:35.487 [2024-07-22 10:48:41.017139] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:35.487 [2024-07-22 10:48:41.017149] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x227d840, cid 4, qid 0 00:32:35.487 [2024-07-22 10:48:41.017356] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.487 [2024-07-22 10:48:41.017362] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.487 [2024-07-22 10:48:41.017366] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.487 [2024-07-22 10:48:41.017369] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x227d840) on tqpair=0x2223560 00:32:35.487 [2024-07-22 10:48:41.017438] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:32:35.487 [2024-07-22 10:48:41.017448] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:32:35.487 [2024-07-22 10:48:41.017455] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.487 [2024-07-22 10:48:41.017459] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2223560) 00:32:35.487 [2024-07-22 10:48:41.017465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.487 [2024-07-22 10:48:41.017475] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x227d840, cid 4, qid 0 00:32:35.487 [2024-07-22 10:48:41.017682] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:35.487 [2024-07-22 10:48:41.017688] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:35.487 [2024-07-22 10:48:41.017692] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:35.487 [2024-07-22 10:48:41.017695] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2223560): datao=0, datal=4096, cccid=4 00:32:35.487 [2024-07-22 10:48:41.017700] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x227d840) on tqpair(0x2223560): expected_datao=0, payload_size=4096 00:32:35.487 [2024-07-22 10:48:41.017704] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.487 [2024-07-22 10:48:41.017711] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:35.487 [2024-07-22 10:48:41.017714] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:35.487 [2024-07-22 10:48:41.017912] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu 
type = 5 00:32:35.487 [2024-07-22 10:48:41.017918] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.487 [2024-07-22 10:48:41.017922] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.487 [2024-07-22 10:48:41.017926] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x227d840) on tqpair=0x2223560 00:32:35.487 [2024-07-22 10:48:41.017934] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:32:35.488 [2024-07-22 10:48:41.017947] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:32:35.488 [2024-07-22 10:48:41.017956] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:32:35.488 [2024-07-22 10:48:41.017963] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.488 [2024-07-22 10:48:41.017967] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2223560) 00:32:35.488 [2024-07-22 10:48:41.017973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.488 [2024-07-22 10:48:41.017983] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x227d840, cid 4, qid 0 00:32:35.488 [2024-07-22 10:48:41.018196] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:35.488 [2024-07-22 10:48:41.018203] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:35.488 [2024-07-22 10:48:41.018206] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:35.488 [2024-07-22 10:48:41.018210] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2223560): datao=0, datal=4096, cccid=4 00:32:35.488 [2024-07-22 10:48:41.018214] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x227d840) on tqpair(0x2223560): expected_datao=0, payload_size=4096 00:32:35.488 [2024-07-22 10:48:41.018218] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.488 [2024-07-22 10:48:41.018225] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:35.488 [2024-07-22 10:48:41.018228] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:35.488 [2024-07-22 10:48:41.018417] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.488 [2024-07-22 10:48:41.018424] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.488 [2024-07-22 10:48:41.018427] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.488 [2024-07-22 10:48:41.018431] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x227d840) on tqpair=0x2223560 00:32:35.488 [2024-07-22 10:48:41.018445] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:32:35.488 [2024-07-22 10:48:41.018454] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:32:35.488 [2024-07-22 10:48:41.018461] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.488 [2024-07-22 10:48:41.018465] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2223560) 00:32:35.488 [2024-07-22 10:48:41.018473] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.488 [2024-07-22 10:48:41.018484] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x227d840, cid 4, qid 0 00:32:35.488 [2024-07-22 10:48:41.018690] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:35.488 [2024-07-22 10:48:41.018696] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:35.488 [2024-07-22 10:48:41.018700] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:35.488 [2024-07-22 10:48:41.018703] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2223560): datao=0, datal=4096, cccid=4 00:32:35.488 [2024-07-22 10:48:41.018708] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x227d840) on tqpair(0x2223560): expected_datao=0, payload_size=4096 00:32:35.488 [2024-07-22 10:48:41.018712] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.488 [2024-07-22 10:48:41.018718] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:35.488 [2024-07-22 10:48:41.018722] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:35.488 [2024-07-22 10:48:41.018919] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.488 [2024-07-22 10:48:41.018925] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.488 [2024-07-22 10:48:41.018929] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.488 [2024-07-22 10:48:41.018932] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x227d840) on tqpair=0x2223560 00:32:35.488 [2024-07-22 10:48:41.018939] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:32:35.488 [2024-07-22 10:48:41.018947] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:32:35.488 [2024-07-22 10:48:41.018957] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:32:35.488 [2024-07-22 10:48:41.018964] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:32:35.488 [2024-07-22 10:48:41.018969] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:32:35.488 [2024-07-22 10:48:41.018974] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:32:35.488 [2024-07-22 10:48:41.018979] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:32:35.488 [2024-07-22 10:48:41.018983] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:32:35.488 [2024-07-22 10:48:41.018988] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:32:35.488 [2024-07-22 10:48:41.019002] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.488 [2024-07-22 10:48:41.019006] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x2223560) 00:32:35.488 [2024-07-22 10:48:41.019012] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.488 [2024-07-22 10:48:41.019018] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.488 [2024-07-22 10:48:41.019022] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.488 [2024-07-22 10:48:41.019025] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2223560) 00:32:35.488 [2024-07-22 10:48:41.019031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.488 [2024-07-22 10:48:41.019044] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x227d840, cid 4, qid 0 00:32:35.488 [2024-07-22 10:48:41.019049] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x227d9c0, cid 5, qid 0 00:32:35.488 [2024-07-22 10:48:41.019271] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.488 [2024-07-22 10:48:41.019278] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.488 [2024-07-22 10:48:41.019281] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.488 [2024-07-22 10:48:41.019285] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x227d840) on tqpair=0x2223560 00:32:35.488 [2024-07-22 10:48:41.019292] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.488 [2024-07-22 10:48:41.019297] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.488 [2024-07-22 10:48:41.019301] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.488 [2024-07-22 10:48:41.019304] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x227d9c0) on tqpair=0x2223560 00:32:35.488 [2024-07-22 10:48:41.019313] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.488 [2024-07-22 10:48:41.019317] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2223560) 00:32:35.488 [2024-07-22 10:48:41.019323] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.488 [2024-07-22 10:48:41.019332] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x227d9c0, cid 5, qid 0 00:32:35.488 [2024-07-22 10:48:41.019523] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.488 [2024-07-22 10:48:41.019530] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.488 [2024-07-22 10:48:41.019533] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.488 [2024-07-22 10:48:41.019537] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x227d9c0) on tqpair=0x2223560 00:32:35.488 [2024-07-22 10:48:41.019546] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.488 [2024-07-22 10:48:41.019549] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2223560) 00:32:35.488 [2024-07-22 10:48:41.019556] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.488 [2024-07-22 10:48:41.019565] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x227d9c0, cid 5, qid 0 00:32:35.488 [2024-07-22 10:48:41.019774] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.488 [2024-07-22 10:48:41.019781] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.488 [2024-07-22 10:48:41.019784] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.488 [2024-07-22 10:48:41.019788] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x227d9c0) on tqpair=0x2223560 00:32:35.488 [2024-07-22 10:48:41.019797] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.488 [2024-07-22 10:48:41.019800] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2223560) 00:32:35.488 [2024-07-22 10:48:41.019806] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.488 [2024-07-22 10:48:41.019815] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x227d9c0, cid 5, qid 0 00:32:35.488 [2024-07-22 10:48:41.019975] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.488 [2024-07-22 10:48:41.019982] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.488 [2024-07-22 10:48:41.019985] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.488 [2024-07-22 10:48:41.019989] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x227d9c0) on tqpair=0x2223560 00:32:35.488 [2024-07-22 10:48:41.020002] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.488 [2024-07-22 10:48:41.020006] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2223560) 00:32:35.488 [2024-07-22 10:48:41.020012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.488 [2024-07-22 10:48:41.020021] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.488 [2024-07-22 10:48:41.020024] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2223560) 00:32:35.488 [2024-07-22 10:48:41.020031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.488 [2024-07-22 10:48:41.020038] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.488 [2024-07-22 10:48:41.020041] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x2223560) 00:32:35.488 [2024-07-22 10:48:41.020047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.488 [2024-07-22 10:48:41.020054] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.488 [2024-07-22 10:48:41.020058] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2223560) 00:32:35.488 [2024-07-22 10:48:41.020064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.488 [2024-07-22 10:48:41.020075] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x227d9c0, cid 5, qid 0 00:32:35.488 [2024-07-22 10:48:41.020079] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x227d840, cid 4, qid 0 
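[editor's note] The four GET LOG PAGE (02h) admin commands just above fetch the pages that feed the report printed below: CDW10 carries the log identifier in its low byte and the dword count in its upper half, so 07ff0001 is Error Information (LID 01h), 007f0002 is SMART / Health Information (LID 02h, 128 dwords = 512 bytes), 007f0003 is Firmware Slot Information (LID 03h), and 03ff0005 is Commands Supported and Effects (LID 05h, 4096 bytes). A hedged sketch of the asynchronous pattern for one such request via the public API is below; the ctrlr pointer is assumed to come from an earlier spdk_nvme_connect(), as in the previous sketch, and the helper names are illustrative.

    /* Hedged sketch: issue Get Log Page for the SMART / Health Information
     * page (LID 02h) and poll the admin queue until it completes. */
    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    struct log_page_ctx {
        struct spdk_nvme_health_information_page page;
        bool done;
    };

    static void
    get_log_page_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
        struct log_page_ctx *ctx = cb_arg;

        if (spdk_nvme_cpl_is_error(cpl)) {
            fprintf(stderr, "Get Log Page failed\n");
        }
        ctx->done = true;
    }

    static int
    read_health_log(struct spdk_nvme_ctrlr *ctrlr, struct log_page_ctx *ctx)
    {
        int rc;

        ctx->done = false;
        /* nsid 0xFFFFFFFF matches the "nsid:ffffffff" seen in the traces. */
        rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr,
                SPDK_NVME_LOG_HEALTH_INFORMATION, SPDK_NVME_GLOBAL_NS_TAG,
                &ctx->page, sizeof(ctx->page), 0, get_log_page_done, ctx);
        if (rc != 0) {
            return rc;
        }
        /* Completion arrives over the admin queue polled here. */
        while (!ctx->done) {
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }
        return 0;
    }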
00:32:35.488 [2024-07-22 10:48:41.020084] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x227db40, cid 6, qid 0 00:32:35.488 [2024-07-22 10:48:41.020089] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x227dcc0, cid 7, qid 0 00:32:35.488 [2024-07-22 10:48:41.020323] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:35.488 [2024-07-22 10:48:41.020330] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:35.488 [2024-07-22 10:48:41.020334] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:35.489 [2024-07-22 10:48:41.020338] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2223560): datao=0, datal=8192, cccid=5 00:32:35.489 [2024-07-22 10:48:41.020343] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x227d9c0) on tqpair(0x2223560): expected_datao=0, payload_size=8192 00:32:35.489 [2024-07-22 10:48:41.020347] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.489 [2024-07-22 10:48:41.024405] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:35.489 [2024-07-22 10:48:41.024412] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:35.489 [2024-07-22 10:48:41.024418] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:35.489 [2024-07-22 10:48:41.024423] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:35.489 [2024-07-22 10:48:41.024427] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:35.489 [2024-07-22 10:48:41.024430] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2223560): datao=0, datal=512, cccid=4 00:32:35.489 [2024-07-22 10:48:41.024434] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x227d840) on tqpair(0x2223560): expected_datao=0, payload_size=512 00:32:35.489 [2024-07-22 10:48:41.024439] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.489 [2024-07-22 10:48:41.024445] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:35.489 [2024-07-22 10:48:41.024448] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:35.489 [2024-07-22 10:48:41.024454] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:35.489 [2024-07-22 10:48:41.024460] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:35.489 [2024-07-22 10:48:41.024463] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:35.489 [2024-07-22 10:48:41.024466] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2223560): datao=0, datal=512, cccid=6 00:32:35.489 [2024-07-22 10:48:41.024470] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x227db40) on tqpair(0x2223560): expected_datao=0, payload_size=512 00:32:35.489 [2024-07-22 10:48:41.024475] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.489 [2024-07-22 10:48:41.024483] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:35.489 [2024-07-22 10:48:41.024487] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:35.489 [2024-07-22 10:48:41.024492] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:35.489 [2024-07-22 10:48:41.024499] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:35.489 [2024-07-22 10:48:41.024503] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:35.489 [2024-07-22 10:48:41.024507] 
nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2223560): datao=0, datal=4096, cccid=7 00:32:35.489 [2024-07-22 10:48:41.024512] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x227dcc0) on tqpair(0x2223560): expected_datao=0, payload_size=4096 00:32:35.489 [2024-07-22 10:48:41.024518] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.489 [2024-07-22 10:48:41.024526] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:35.489 [2024-07-22 10:48:41.024531] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:35.489 [2024-07-22 10:48:41.024537] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.489 [2024-07-22 10:48:41.024544] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.489 [2024-07-22 10:48:41.024549] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.489 [2024-07-22 10:48:41.024553] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x227d9c0) on tqpair=0x2223560 00:32:35.489 [2024-07-22 10:48:41.024566] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.489 [2024-07-22 10:48:41.024572] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.489 [2024-07-22 10:48:41.024575] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.489 [2024-07-22 10:48:41.024579] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x227d840) on tqpair=0x2223560 00:32:35.489 [2024-07-22 10:48:41.024588] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.489 [2024-07-22 10:48:41.024594] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.489 [2024-07-22 10:48:41.024597] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.489 [2024-07-22 10:48:41.024601] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x227db40) on tqpair=0x2223560 00:32:35.489 [2024-07-22 10:48:41.024608] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.489 [2024-07-22 10:48:41.024613] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.489 [2024-07-22 10:48:41.024617] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.489 [2024-07-22 10:48:41.024620] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x227dcc0) on tqpair=0x2223560 00:32:35.489 ===================================================== 00:32:35.489 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:35.489 ===================================================== 00:32:35.489 Controller Capabilities/Features 00:32:35.489 ================================ 00:32:35.489 Vendor ID: 8086 00:32:35.489 Subsystem Vendor ID: 8086 00:32:35.489 Serial Number: SPDK00000000000001 00:32:35.489 Model Number: SPDK bdev Controller 00:32:35.489 Firmware Version: 24.09 00:32:35.489 Recommended Arb Burst: 6 00:32:35.489 IEEE OUI Identifier: e4 d2 5c 00:32:35.489 Multi-path I/O 00:32:35.489 May have multiple subsystem ports: Yes 00:32:35.489 May have multiple controllers: Yes 00:32:35.489 Associated with SR-IOV VF: No 00:32:35.489 Max Data Transfer Size: 131072 00:32:35.489 Max Number of Namespaces: 32 00:32:35.489 Max Number of I/O Queues: 127 00:32:35.489 NVMe Specification Version (VS): 1.3 00:32:35.489 NVMe Specification Version (Identify): 1.3 00:32:35.489 Maximum Queue Entries: 128 00:32:35.489 Contiguous Queues Required: Yes 00:32:35.489 
Arbitration Mechanisms Supported 00:32:35.489 Weighted Round Robin: Not Supported 00:32:35.489 Vendor Specific: Not Supported 00:32:35.489 Reset Timeout: 15000 ms 00:32:35.489 Doorbell Stride: 4 bytes 00:32:35.489 NVM Subsystem Reset: Not Supported 00:32:35.489 Command Sets Supported 00:32:35.489 NVM Command Set: Supported 00:32:35.489 Boot Partition: Not Supported 00:32:35.489 Memory Page Size Minimum: 4096 bytes 00:32:35.489 Memory Page Size Maximum: 4096 bytes 00:32:35.489 Persistent Memory Region: Not Supported 00:32:35.489 Optional Asynchronous Events Supported 00:32:35.489 Namespace Attribute Notices: Supported 00:32:35.489 Firmware Activation Notices: Not Supported 00:32:35.489 ANA Change Notices: Not Supported 00:32:35.489 PLE Aggregate Log Change Notices: Not Supported 00:32:35.489 LBA Status Info Alert Notices: Not Supported 00:32:35.489 EGE Aggregate Log Change Notices: Not Supported 00:32:35.489 Normal NVM Subsystem Shutdown event: Not Supported 00:32:35.489 Zone Descriptor Change Notices: Not Supported 00:32:35.489 Discovery Log Change Notices: Not Supported 00:32:35.489 Controller Attributes 00:32:35.489 128-bit Host Identifier: Supported 00:32:35.489 Non-Operational Permissive Mode: Not Supported 00:32:35.489 NVM Sets: Not Supported 00:32:35.489 Read Recovery Levels: Not Supported 00:32:35.489 Endurance Groups: Not Supported 00:32:35.489 Predictable Latency Mode: Not Supported 00:32:35.489 Traffic Based Keep ALive: Not Supported 00:32:35.489 Namespace Granularity: Not Supported 00:32:35.489 SQ Associations: Not Supported 00:32:35.489 UUID List: Not Supported 00:32:35.489 Multi-Domain Subsystem: Not Supported 00:32:35.489 Fixed Capacity Management: Not Supported 00:32:35.489 Variable Capacity Management: Not Supported 00:32:35.489 Delete Endurance Group: Not Supported 00:32:35.489 Delete NVM Set: Not Supported 00:32:35.489 Extended LBA Formats Supported: Not Supported 00:32:35.489 Flexible Data Placement Supported: Not Supported 00:32:35.489 00:32:35.489 Controller Memory Buffer Support 00:32:35.489 ================================ 00:32:35.489 Supported: No 00:32:35.489 00:32:35.489 Persistent Memory Region Support 00:32:35.489 ================================ 00:32:35.489 Supported: No 00:32:35.489 00:32:35.489 Admin Command Set Attributes 00:32:35.489 ============================ 00:32:35.489 Security Send/Receive: Not Supported 00:32:35.489 Format NVM: Not Supported 00:32:35.489 Firmware Activate/Download: Not Supported 00:32:35.489 Namespace Management: Not Supported 00:32:35.489 Device Self-Test: Not Supported 00:32:35.489 Directives: Not Supported 00:32:35.489 NVMe-MI: Not Supported 00:32:35.489 Virtualization Management: Not Supported 00:32:35.489 Doorbell Buffer Config: Not Supported 00:32:35.489 Get LBA Status Capability: Not Supported 00:32:35.489 Command & Feature Lockdown Capability: Not Supported 00:32:35.489 Abort Command Limit: 4 00:32:35.489 Async Event Request Limit: 4 00:32:35.489 Number of Firmware Slots: N/A 00:32:35.489 Firmware Slot 1 Read-Only: N/A 00:32:35.489 Firmware Activation Without Reset: N/A 00:32:35.489 Multiple Update Detection Support: N/A 00:32:35.489 Firmware Update Granularity: No Information Provided 00:32:35.489 Per-Namespace SMART Log: No 00:32:35.489 Asymmetric Namespace Access Log Page: Not Supported 00:32:35.489 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:32:35.489 Command Effects Log Page: Supported 00:32:35.489 Get Log Page Extended Data: Supported 00:32:35.489 Telemetry Log Pages: Not Supported 00:32:35.489 Persistent Event Log 
Pages: Not Supported 00:32:35.489 Supported Log Pages Log Page: May Support 00:32:35.489 Commands Supported & Effects Log Page: Not Supported 00:32:35.489 Feature Identifiers & Effects Log Page:May Support 00:32:35.489 NVMe-MI Commands & Effects Log Page: May Support 00:32:35.489 Data Area 4 for Telemetry Log: Not Supported 00:32:35.489 Error Log Page Entries Supported: 128 00:32:35.489 Keep Alive: Supported 00:32:35.489 Keep Alive Granularity: 10000 ms 00:32:35.489 00:32:35.489 NVM Command Set Attributes 00:32:35.489 ========================== 00:32:35.489 Submission Queue Entry Size 00:32:35.489 Max: 64 00:32:35.489 Min: 64 00:32:35.489 Completion Queue Entry Size 00:32:35.489 Max: 16 00:32:35.489 Min: 16 00:32:35.490 Number of Namespaces: 32 00:32:35.490 Compare Command: Supported 00:32:35.490 Write Uncorrectable Command: Not Supported 00:32:35.490 Dataset Management Command: Supported 00:32:35.490 Write Zeroes Command: Supported 00:32:35.490 Set Features Save Field: Not Supported 00:32:35.490 Reservations: Supported 00:32:35.490 Timestamp: Not Supported 00:32:35.490 Copy: Supported 00:32:35.490 Volatile Write Cache: Present 00:32:35.490 Atomic Write Unit (Normal): 1 00:32:35.490 Atomic Write Unit (PFail): 1 00:32:35.490 Atomic Compare & Write Unit: 1 00:32:35.490 Fused Compare & Write: Supported 00:32:35.490 Scatter-Gather List 00:32:35.490 SGL Command Set: Supported 00:32:35.490 SGL Keyed: Supported 00:32:35.490 SGL Bit Bucket Descriptor: Not Supported 00:32:35.490 SGL Metadata Pointer: Not Supported 00:32:35.490 Oversized SGL: Not Supported 00:32:35.490 SGL Metadata Address: Not Supported 00:32:35.490 SGL Offset: Supported 00:32:35.490 Transport SGL Data Block: Not Supported 00:32:35.490 Replay Protected Memory Block: Not Supported 00:32:35.490 00:32:35.490 Firmware Slot Information 00:32:35.490 ========================= 00:32:35.490 Active slot: 1 00:32:35.490 Slot 1 Firmware Revision: 24.09 00:32:35.490 00:32:35.490 00:32:35.490 Commands Supported and Effects 00:32:35.490 ============================== 00:32:35.490 Admin Commands 00:32:35.490 -------------- 00:32:35.490 Get Log Page (02h): Supported 00:32:35.490 Identify (06h): Supported 00:32:35.490 Abort (08h): Supported 00:32:35.490 Set Features (09h): Supported 00:32:35.490 Get Features (0Ah): Supported 00:32:35.490 Asynchronous Event Request (0Ch): Supported 00:32:35.490 Keep Alive (18h): Supported 00:32:35.490 I/O Commands 00:32:35.490 ------------ 00:32:35.490 Flush (00h): Supported LBA-Change 00:32:35.490 Write (01h): Supported LBA-Change 00:32:35.490 Read (02h): Supported 00:32:35.490 Compare (05h): Supported 00:32:35.490 Write Zeroes (08h): Supported LBA-Change 00:32:35.490 Dataset Management (09h): Supported LBA-Change 00:32:35.490 Copy (19h): Supported LBA-Change 00:32:35.490 00:32:35.490 Error Log 00:32:35.490 ========= 00:32:35.490 00:32:35.490 Arbitration 00:32:35.490 =========== 00:32:35.490 Arbitration Burst: 1 00:32:35.490 00:32:35.490 Power Management 00:32:35.490 ================ 00:32:35.490 Number of Power States: 1 00:32:35.490 Current Power State: Power State #0 00:32:35.490 Power State #0: 00:32:35.490 Max Power: 0.00 W 00:32:35.490 Non-Operational State: Operational 00:32:35.490 Entry Latency: Not Reported 00:32:35.490 Exit Latency: Not Reported 00:32:35.490 Relative Read Throughput: 0 00:32:35.490 Relative Read Latency: 0 00:32:35.490 Relative Write Throughput: 0 00:32:35.490 Relative Write Latency: 0 00:32:35.490 Idle Power: Not Reported 00:32:35.490 Active Power: Not Reported 00:32:35.490 
Non-Operational Permissive Mode: Not Supported 00:32:35.490 00:32:35.490 Health Information 00:32:35.490 ================== 00:32:35.490 Critical Warnings: 00:32:35.490 Available Spare Space: OK 00:32:35.490 Temperature: OK 00:32:35.490 Device Reliability: OK 00:32:35.490 Read Only: No 00:32:35.490 Volatile Memory Backup: OK 00:32:35.490 Current Temperature: 0 Kelvin (-273 Celsius) 00:32:35.490 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:32:35.490 Available Spare: 0% 00:32:35.490 Available Spare Threshold: 0% 00:32:35.490 Life Percentage Used:[2024-07-22 10:48:41.024717] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.490 [2024-07-22 10:48:41.024722] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2223560) 00:32:35.490 [2024-07-22 10:48:41.024729] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.490 [2024-07-22 10:48:41.024742] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x227dcc0, cid 7, qid 0 00:32:35.490 [2024-07-22 10:48:41.024965] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.490 [2024-07-22 10:48:41.024971] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.490 [2024-07-22 10:48:41.024975] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.490 [2024-07-22 10:48:41.024978] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x227dcc0) on tqpair=0x2223560 00:32:35.490 [2024-07-22 10:48:41.025009] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:32:35.490 [2024-07-22 10:48:41.025018] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x227d240) on tqpair=0x2223560 00:32:35.490 [2024-07-22 10:48:41.025024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.490 [2024-07-22 10:48:41.025029] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x227d3c0) on tqpair=0x2223560 00:32:35.490 [2024-07-22 10:48:41.025036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.490 [2024-07-22 10:48:41.025041] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x227d540) on tqpair=0x2223560 00:32:35.490 [2024-07-22 10:48:41.025045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.490 [2024-07-22 10:48:41.025050] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x227d6c0) on tqpair=0x2223560 00:32:35.490 [2024-07-22 10:48:41.025055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.490 [2024-07-22 10:48:41.025063] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.490 [2024-07-22 10:48:41.025066] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.490 [2024-07-22 10:48:41.025070] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2223560) 00:32:35.490 [2024-07-22 10:48:41.025077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.490 [2024-07-22 10:48:41.025088] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x227d6c0, cid 3, qid 0 00:32:35.490 [2024-07-22 10:48:41.025315] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.490 [2024-07-22 10:48:41.025321] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.490 [2024-07-22 10:48:41.025325] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.490 [2024-07-22 10:48:41.025328] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x227d6c0) on tqpair=0x2223560 00:32:35.490 [2024-07-22 10:48:41.025335] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.490 [2024-07-22 10:48:41.025339] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.490 [2024-07-22 10:48:41.025342] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2223560) 00:32:35.490 [2024-07-22 10:48:41.025349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.490 [2024-07-22 10:48:41.025361] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x227d6c0, cid 3, qid 0 00:32:35.490 [2024-07-22 10:48:41.025617] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.490 [2024-07-22 10:48:41.025624] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.490 [2024-07-22 10:48:41.025627] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.490 [2024-07-22 10:48:41.025631] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x227d6c0) on tqpair=0x2223560 00:32:35.490 [2024-07-22 10:48:41.025636] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:32:35.490 [2024-07-22 10:48:41.025641] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:32:35.490 [2024-07-22 10:48:41.025650] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.490 [2024-07-22 10:48:41.025654] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.490 [2024-07-22 10:48:41.025657] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2223560) 00:32:35.490 [2024-07-22 10:48:41.025664] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.490 [2024-07-22 10:48:41.025673] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x227d6c0, cid 3, qid 0 00:32:35.490 [2024-07-22 10:48:41.025869] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.490 [2024-07-22 10:48:41.025876] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.490 [2024-07-22 10:48:41.025879] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.490 [2024-07-22 10:48:41.025883] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x227d6c0) on tqpair=0x2223560 00:32:35.491 [2024-07-22 10:48:41.025894] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.491 [2024-07-22 10:48:41.025898] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.491 [2024-07-22 10:48:41.025902] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2223560) 00:32:35.491 [2024-07-22 10:48:41.025908] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.491 [2024-07-22 10:48:41.025918] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x227d6c0, cid 3, qid 0 00:32:35.491 [2024-07-22 10:48:41.026172] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.491 [2024-07-22 10:48:41.026179] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.491 [2024-07-22 10:48:41.026182] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.491 [2024-07-22 10:48:41.026186] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x227d6c0) on tqpair=0x2223560 00:32:35.491 [2024-07-22 10:48:41.026195] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.491 [2024-07-22 10:48:41.026199] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.491 [2024-07-22 10:48:41.026203] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2223560) 00:32:35.491 [2024-07-22 10:48:41.026209] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.491 [2024-07-22 10:48:41.026219] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x227d6c0, cid 3, qid 0 00:32:35.491 [2024-07-22 10:48:41.026425] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.491 [2024-07-22 10:48:41.026432] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.491 [2024-07-22 10:48:41.026435] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.491 [2024-07-22 10:48:41.026439] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x227d6c0) on tqpair=0x2223560 00:32:35.491 [2024-07-22 10:48:41.026448] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.491 [2024-07-22 10:48:41.026452] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.491 [2024-07-22 10:48:41.026455] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2223560) 00:32:35.491 [2024-07-22 10:48:41.026462] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.491 [2024-07-22 10:48:41.026471] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x227d6c0, cid 3, qid 0 00:32:35.491 [2024-07-22 10:48:41.026676] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.491 [2024-07-22 10:48:41.026683] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.491 [2024-07-22 10:48:41.026687] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.491 [2024-07-22 10:48:41.026691] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x227d6c0) on tqpair=0x2223560 00:32:35.491 [2024-07-22 10:48:41.026700] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.491 [2024-07-22 10:48:41.026705] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.491 [2024-07-22 10:48:41.026708] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2223560) 00:32:35.491 [2024-07-22 10:48:41.026715] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.491 [2024-07-22 10:48:41.026724] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x227d6c0, cid 3, qid 0 00:32:35.491 [2024-07-22 
10:48:41.026921] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.491 [2024-07-22 10:48:41.026927] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.491 [2024-07-22 10:48:41.026932] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.491 [2024-07-22 10:48:41.026938] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x227d6c0) on tqpair=0x2223560 00:32:35.491 [2024-07-22 10:48:41.026947] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.491 [2024-07-22 10:48:41.026951] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.491 [2024-07-22 10:48:41.026957] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2223560) 00:32:35.491 [2024-07-22 10:48:41.026963] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.491 [2024-07-22 10:48:41.026975] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x227d6c0, cid 3, qid 0 00:32:35.491 [2024-07-22 10:48:41.027181] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.491 [2024-07-22 10:48:41.027187] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.491 [2024-07-22 10:48:41.027191] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.491 [2024-07-22 10:48:41.027194] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x227d6c0) on tqpair=0x2223560 00:32:35.491 [2024-07-22 10:48:41.027203] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.491 [2024-07-22 10:48:41.027207] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.491 [2024-07-22 10:48:41.027211] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2223560) 00:32:35.491 [2024-07-22 10:48:41.027217] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.491 [2024-07-22 10:48:41.027227] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x227d6c0, cid 3, qid 0 00:32:35.491 [2024-07-22 10:48:41.027482] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.491 [2024-07-22 10:48:41.027490] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.491 [2024-07-22 10:48:41.027493] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.491 [2024-07-22 10:48:41.027497] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x227d6c0) on tqpair=0x2223560 00:32:35.491 [2024-07-22 10:48:41.027507] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.491 [2024-07-22 10:48:41.027510] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.491 [2024-07-22 10:48:41.027514] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2223560) 00:32:35.491 [2024-07-22 10:48:41.027520] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.491 [2024-07-22 10:48:41.027530] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x227d6c0, cid 3, qid 0 00:32:35.491 [2024-07-22 10:48:41.027735] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.491 [2024-07-22 10:48:41.027742] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.491 
[2024-07-22 10:48:41.027745] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.491 [2024-07-22 10:48:41.027749] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x227d6c0) on tqpair=0x2223560 00:32:35.491 [2024-07-22 10:48:41.027758] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.491 [2024-07-22 10:48:41.027762] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.491 [2024-07-22 10:48:41.027765] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2223560) 00:32:35.491 [2024-07-22 10:48:41.027772] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.491 [2024-07-22 10:48:41.027781] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x227d6c0, cid 3, qid 0 00:32:35.491 [2024-07-22 10:48:41.027953] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.491 [2024-07-22 10:48:41.027959] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.491 [2024-07-22 10:48:41.027963] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.491 [2024-07-22 10:48:41.027966] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x227d6c0) on tqpair=0x2223560 00:32:35.491 [2024-07-22 10:48:41.027976] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.491 [2024-07-22 10:48:41.027979] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.491 [2024-07-22 10:48:41.027983] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2223560) 00:32:35.491 [2024-07-22 10:48:41.027992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.491 [2024-07-22 10:48:41.028001] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x227d6c0, cid 3, qid 0 00:32:35.491 [2024-07-22 10:48:41.028237] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.491 [2024-07-22 10:48:41.028243] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.491 [2024-07-22 10:48:41.028247] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.491 [2024-07-22 10:48:41.028250] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x227d6c0) on tqpair=0x2223560 00:32:35.491 [2024-07-22 10:48:41.028260] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.491 [2024-07-22 10:48:41.028263] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.491 [2024-07-22 10:48:41.028267] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2223560) 00:32:35.491 [2024-07-22 10:48:41.028273] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.491 [2024-07-22 10:48:41.028283] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x227d6c0, cid 3, qid 0 00:32:35.491 [2024-07-22 10:48:41.032402] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.491 [2024-07-22 10:48:41.032411] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.491 [2024-07-22 10:48:41.032414] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.491 [2024-07-22 10:48:41.032418] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x227d6c0) on tqpair=0x2223560 00:32:35.491 [2024-07-22 10:48:41.032425] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:32:35.491 0% 00:32:35.491 Data Units Read: 0 00:32:35.491 Data Units Written: 0 00:32:35.491 Host Read Commands: 0 00:32:35.491 Host Write Commands: 0 00:32:35.491 Controller Busy Time: 0 minutes 00:32:35.491 Power Cycles: 0 00:32:35.491 Power On Hours: 0 hours 00:32:35.491 Unsafe Shutdowns: 0 00:32:35.491 Unrecoverable Media Errors: 0 00:32:35.491 Lifetime Error Log Entries: 0 00:32:35.491 Warning Temperature Time: 0 minutes 00:32:35.491 Critical Temperature Time: 0 minutes 00:32:35.491 00:32:35.491 Number of Queues 00:32:35.491 ================ 00:32:35.491 Number of I/O Submission Queues: 127 00:32:35.491 Number of I/O Completion Queues: 127 00:32:35.491 00:32:35.491 Active Namespaces 00:32:35.491 ================= 00:32:35.491 Namespace ID:1 00:32:35.491 Error Recovery Timeout: Unlimited 00:32:35.491 Command Set Identifier: NVM (00h) 00:32:35.491 Deallocate: Supported 00:32:35.491 Deallocated/Unwritten Error: Not Supported 00:32:35.491 Deallocated Read Value: Unknown 00:32:35.491 Deallocate in Write Zeroes: Not Supported 00:32:35.491 Deallocated Guard Field: 0xFFFF 00:32:35.491 Flush: Supported 00:32:35.491 Reservation: Supported 00:32:35.491 Namespace Sharing Capabilities: Multiple Controllers 00:32:35.491 Size (in LBAs): 131072 (0GiB) 00:32:35.491 Capacity (in LBAs): 131072 (0GiB) 00:32:35.491 Utilization (in LBAs): 131072 (0GiB) 00:32:35.491 NGUID: ABCDEF0123456789ABCDEF0123456789 00:32:35.491 EUI64: ABCDEF0123456789 00:32:35.491 UUID: f4df5587-6b03-4345-94f6-20c7d83d9139 00:32:35.491 Thin Provisioning: Not Supported 00:32:35.492 Per-NS Atomic Units: Yes 00:32:35.492 Atomic Boundary Size (Normal): 0 00:32:35.492 Atomic Boundary Size (PFail): 0 00:32:35.492 Atomic Boundary Offset: 0 00:32:35.492 Maximum Single Source Range Length: 65535 00:32:35.492 Maximum Copy Length: 65535 00:32:35.492 Maximum Source Range Count: 1 00:32:35.492 NGUID/EUI64 Never Reused: No 00:32:35.492 Namespace Write Protected: No 00:32:35.492 Number of LBA Formats: 1 00:32:35.492 Current LBA Format: LBA Format #00 00:32:35.492 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:35.492 00:32:35.492 10:48:41 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:32:35.492 10:48:41 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:35.492 10:48:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.492 10:48:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:35.492 10:48:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.492 10:48:41 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:32:35.492 10:48:41 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:32:35.492 10:48:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:35.492 10:48:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:32:35.492 10:48:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:35.492 10:48:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:32:35.492 10:48:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:35.492 10:48:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:35.492 rmmod nvme_tcp 00:32:35.492 rmmod 
nvme_fabrics 00:32:35.492 rmmod nvme_keyring 00:32:35.492 10:48:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:35.492 10:48:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:32:35.492 10:48:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:32:35.492 10:48:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 2150878 ']' 00:32:35.492 10:48:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 2150878 00:32:35.492 10:48:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 2150878 ']' 00:32:35.492 10:48:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 2150878 00:32:35.492 10:48:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:32:35.492 10:48:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:35.492 10:48:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2150878 00:32:35.492 10:48:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:35.492 10:48:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:35.492 10:48:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2150878' 00:32:35.492 killing process with pid 2150878 00:32:35.492 10:48:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 2150878 00:32:35.492 10:48:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 2150878 00:32:35.760 10:48:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:35.760 10:48:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:35.760 10:48:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:35.760 10:48:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:35.760 10:48:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:35.760 10:48:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:35.760 10:48:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:35.760 10:48:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:38.299 10:48:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:38.299 00:32:38.299 real 0m11.964s 00:32:38.299 user 0m8.154s 00:32:38.299 sys 0m6.415s 00:32:38.299 10:48:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:38.299 10:48:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:38.299 ************************************ 00:32:38.299 END TEST nvmf_identify 00:32:38.299 ************************************ 00:32:38.299 10:48:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:32:38.299 10:48:43 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:32:38.299 10:48:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:38.299 10:48:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:38.299 10:48:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:38.299 ************************************ 00:32:38.299 START TEST nvmf_perf 00:32:38.299 ************************************ 00:32:38.299 10:48:43 nvmf_tcp.nvmf_perf 
-- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:32:38.299 * Looking for test storage... 00:32:38.299 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:38.299 10:48:43 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:38.299 10:48:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:32:38.299 10:48:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:38.299 10:48:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:38.299 10:48:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:38.299 10:48:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:38.299 10:48:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:38.299 10:48:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:38.299 10:48:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:38.299 10:48:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:38.299 10:48:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:38.299 10:48:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:38.299 10:48:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:38.299 10:48:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:38.299 10:48:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:38.299 10:48:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:38.299 10:48:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:38.299 10:48:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:38.299 10:48:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:38.299 10:48:43 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:38.299 10:48:43 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:38.299 10:48:43 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:38.299 10:48:43 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.299 10:48:43 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.300 10:48:43 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.300 10:48:43 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:32:38.300 10:48:43 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.300 10:48:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:32:38.300 10:48:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:38.300 10:48:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:38.300 10:48:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:38.300 10:48:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:38.300 10:48:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:38.300 10:48:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:38.300 10:48:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:38.300 10:48:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:38.300 10:48:43 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:32:38.300 10:48:43 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:32:38.300 10:48:43 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:38.300 10:48:43 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:32:38.300 10:48:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:38.300 10:48:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:38.300 10:48:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:38.300 10:48:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:38.300 10:48:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:38.300 10:48:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:38.300 
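Before any NVMe/TCP traffic can flow, nvmftestinit (nvmf/common.sh) rebuilds the test network: it clears any stale SPDK namespace, scans PCI for the two ice-driven ports, moves one of them into a private namespace for the target while the other stays in the root namespace for the initiator, then pings in both directions. The xtrace that follows records every step of that; condensed into a sketch, and using the interface names, addresses and namespace name this particular run happens to pick (they are not fixed constants of the script), it is roughly:

    #!/usr/bin/env bash
    # Rough sketch of the nvmf_tcp_init steps traced below (not the full common.sh logic).
    NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk    # names and addresses as seen in this run
    NVMF_INITIATOR_IP=10.0.0.1
    NVMF_FIRST_TARGET_IP=10.0.0.2

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NVMF_TARGET_NAMESPACE"              # target port gets its own namespace
    ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"
    ip addr add "$NVMF_INITIATOR_IP/24" dev cvl_0_1    # initiator port stays in the root namespace
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add "$NVMF_FIRST_TARGET_IP/24" dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 "$NVMF_FIRST_TARGET_IP"                  # initiator to target sanity check
    ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 "$NVMF_INITIATOR_IP"   # target to initiator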
10:48:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:38.300 10:48:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:38.300 10:48:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:38.300 10:48:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:38.300 10:48:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:32:38.300 10:48:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- 
# (( 2 == 0 )) 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:46.442 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:46.442 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:46.442 Found net devices under 0000:31:00.0: cvl_0_0 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:46.442 Found net devices under 0000:31:00.1: cvl_0_1 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:46.442 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:46.442 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.672 ms 00:32:46.442 00:32:46.442 --- 10.0.0.2 ping statistics --- 00:32:46.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:46.442 rtt min/avg/max/mdev = 0.672/0.672/0.672/0.000 ms 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:46.442 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:46.442 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:32:46.442 00:32:46.442 --- 10.0.0.1 ping statistics --- 00:32:46.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:46.442 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=2155746 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 2155746 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:46.442 10:48:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 2155746 ']' 00:32:46.443 10:48:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:46.443 10:48:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:46.443 10:48:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:46.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:46.443 10:48:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:46.443 10:48:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:46.443 [2024-07-22 10:48:51.717009] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:32:46.443 [2024-07-22 10:48:51.717076] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:46.443 EAL: No free 2048 kB hugepages reported on node 1 00:32:46.443 [2024-07-22 10:48:51.797319] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:46.443 [2024-07-22 10:48:51.836989] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:46.443 [2024-07-22 10:48:51.837033] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
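With the namespaces in place, nvmfappstart launches nvmf_tgt inside the target namespace (the EAL, tracepoint and reactor notices around this point are its startup banner), and perf.sh then provisions it over rpc.py: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1, a 64 MiB Malloc bdev plus the local Nvme0n1 bdev (attached from the PCIe SSD at 0000:65:00.0 by gen_nvme.sh) as namespaces, and listeners on 10.0.0.2:4420. A condensed sketch of the commands traced around this point, using the paths, NQN and serial this run uses:

    # Sketch of the target bring-up and subsystem provisioning traced around this point.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    NS="ip netns exec cvl_0_0_ns_spdk"

    # -m 0xF: 4 reactor cores, -e 0xFFFF: tracepoint group mask, -i 0: shm id 0
    $NS $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # (the real script waits for the RPC socket via waitforlisten before issuing RPCs)

    $SPDK/scripts/rpc.py bdev_malloc_create 64 512     # 64 MiB, 512 B blocks -> Malloc0
    $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420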
00:32:46.443 [2024-07-22 10:48:51.837040] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:46.443 [2024-07-22 10:48:51.837047] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:46.443 [2024-07-22 10:48:51.837052] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:46.443 [2024-07-22 10:48:51.837107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:46.443 [2024-07-22 10:48:51.837195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:46.443 [2024-07-22 10:48:51.837352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:46.443 [2024-07-22 10:48:51.837352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:47.015 10:48:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:47.015 10:48:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:32:47.015 10:48:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:47.015 10:48:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:47.015 10:48:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:47.015 10:48:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:47.015 10:48:52 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:47.015 10:48:52 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:32:47.586 10:48:53 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:32:47.586 10:48:53 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:32:47.586 10:48:53 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:32:47.586 10:48:53 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:47.846 10:48:53 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:32:47.846 10:48:53 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:32:47.846 10:48:53 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:32:47.846 10:48:53 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:32:47.846 10:48:53 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:32:47.846 [2024-07-22 10:48:53.521501] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:48.107 10:48:53 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:48.107 10:48:53 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:32:48.107 10:48:53 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:48.367 10:48:53 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:32:48.367 10:48:53 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:32:48.367 10:48:54 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:48.628 [2024-07-22 10:48:54.199936] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:48.628 10:48:54 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:48.888 10:48:54 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:32:48.888 10:48:54 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:32:48.888 10:48:54 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:32:48.888 10:48:54 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:32:50.268 Initializing NVMe Controllers 00:32:50.268 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:32:50.268 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:32:50.268 Initialization complete. Launching workers. 00:32:50.268 ======================================================== 00:32:50.268 Latency(us) 00:32:50.268 Device Information : IOPS MiB/s Average min max 00:32:50.268 PCIE (0000:65:00.0) NSID 1 from core 0: 79041.72 308.76 404.45 13.36 6198.58 00:32:50.268 ======================================================== 00:32:50.268 Total : 79041.72 308.76 404.45 13.36 6198.58 00:32:50.268 00:32:50.268 10:48:55 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:50.268 EAL: No free 2048 kB hugepages reported on node 1 00:32:51.207 Initializing NVMe Controllers 00:32:51.207 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:51.207 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:51.207 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:32:51.207 Initialization complete. Launching workers. 
00:32:51.207 ======================================================== 00:32:51.207 Latency(us) 00:32:51.207 Device Information : IOPS MiB/s Average min max 00:32:51.207 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 101.63 0.40 9843.10 277.33 45891.44 00:32:51.207 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 44.84 0.18 22479.78 5995.69 55868.71 00:32:51.207 ======================================================== 00:32:51.207 Total : 146.46 0.57 13711.47 277.33 55868.71 00:32:51.207 00:32:51.207 10:48:56 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:51.207 EAL: No free 2048 kB hugepages reported on node 1 00:32:52.604 Initializing NVMe Controllers 00:32:52.604 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:52.604 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:52.604 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:32:52.604 Initialization complete. Launching workers. 00:32:52.604 ======================================================== 00:32:52.604 Latency(us) 00:32:52.604 Device Information : IOPS MiB/s Average min max 00:32:52.604 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10421.98 40.71 3078.35 522.39 7021.23 00:32:52.604 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3823.99 14.94 8405.79 6175.73 15871.19 00:32:52.604 ======================================================== 00:32:52.604 Total : 14245.97 55.65 4508.37 522.39 15871.19 00:32:52.604 00:32:52.604 10:48:58 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:32:52.604 10:48:58 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:32:52.604 10:48:58 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:52.604 EAL: No free 2048 kB hugepages reported on node 1 00:32:55.147 Initializing NVMe Controllers 00:32:55.147 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:55.147 Controller IO queue size 128, less than required. 00:32:55.147 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:55.147 Controller IO queue size 128, less than required. 00:32:55.147 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:55.147 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:55.147 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:32:55.147 Initialization complete. Launching workers. 
00:32:55.147 ======================================================== 00:32:55.147 Latency(us) 00:32:55.147 Device Information : IOPS MiB/s Average min max 00:32:55.147 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1442.38 360.59 90423.92 50314.57 145255.02 00:32:55.147 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 574.35 143.59 229798.40 93487.29 377114.60 00:32:55.147 ======================================================== 00:32:55.147 Total : 2016.73 504.18 130117.00 50314.57 377114.60 00:32:55.147 00:32:55.147 10:49:00 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:32:55.147 EAL: No free 2048 kB hugepages reported on node 1 00:32:55.408 No valid NVMe controllers or AIO or URING devices found 00:32:55.408 Initializing NVMe Controllers 00:32:55.408 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:55.408 Controller IO queue size 128, less than required. 00:32:55.408 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:55.408 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:32:55.408 Controller IO queue size 128, less than required. 00:32:55.408 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:55.408 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:32:55.408 WARNING: Some requested NVMe devices were skipped 00:32:55.408 10:49:01 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:32:55.408 EAL: No free 2048 kB hugepages reported on node 1 00:32:57.952 Initializing NVMe Controllers 00:32:57.953 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:57.953 Controller IO queue size 128, less than required. 00:32:57.953 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:57.953 Controller IO queue size 128, less than required. 00:32:57.953 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:57.953 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:57.953 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:32:57.953 Initialization complete. Launching workers. 
00:32:57.953 00:32:57.953 ==================== 00:32:57.953 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:32:57.953 TCP transport: 00:32:57.953 polls: 21676 00:32:57.953 idle_polls: 9520 00:32:57.953 sock_completions: 12156 00:32:57.953 nvme_completions: 5895 00:32:57.953 submitted_requests: 8852 00:32:57.953 queued_requests: 1 00:32:57.953 00:32:57.953 ==================== 00:32:57.953 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:32:57.953 TCP transport: 00:32:57.953 polls: 24969 00:32:57.953 idle_polls: 12628 00:32:57.953 sock_completions: 12341 00:32:57.953 nvme_completions: 6125 00:32:57.953 submitted_requests: 9258 00:32:57.953 queued_requests: 1 00:32:57.953 ======================================================== 00:32:57.953 Latency(us) 00:32:57.953 Device Information : IOPS MiB/s Average min max 00:32:57.953 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1470.99 367.75 88140.35 47898.14 136072.15 00:32:57.953 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1528.39 382.10 84673.13 44934.62 129777.58 00:32:57.953 ======================================================== 00:32:57.953 Total : 2999.38 749.84 86373.56 44934.62 136072.15 00:32:57.953 00:32:57.953 10:49:03 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:32:57.953 10:49:03 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:58.213 10:49:03 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:32:58.213 10:49:03 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:65:00.0 ']' 00:32:58.213 10:49:03 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:32:59.153 10:49:04 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=3acaabbc-6876-4e09-b821-e77fe66bfcb7 00:32:59.153 10:49:04 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 3acaabbc-6876-4e09-b821-e77fe66bfcb7 00:32:59.153 10:49:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=3acaabbc-6876-4e09-b821-e77fe66bfcb7 00:32:59.153 10:49:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:32:59.153 10:49:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:32:59.153 10:49:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:32:59.153 10:49:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:59.413 10:49:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:32:59.413 { 00:32:59.413 "uuid": "3acaabbc-6876-4e09-b821-e77fe66bfcb7", 00:32:59.413 "name": "lvs_0", 00:32:59.413 "base_bdev": "Nvme0n1", 00:32:59.413 "total_data_clusters": 457407, 00:32:59.413 "free_clusters": 457407, 00:32:59.413 "block_size": 512, 00:32:59.413 "cluster_size": 4194304 00:32:59.413 } 00:32:59.413 ]' 00:32:59.413 10:49:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="3acaabbc-6876-4e09-b821-e77fe66bfcb7") .free_clusters' 00:32:59.413 10:49:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=457407 00:32:59.413 10:49:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="3acaabbc-6876-4e09-b821-e77fe66bfcb7") .cluster_size' 00:32:59.413 10:49:05 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:32:59.413 10:49:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=1829628 00:32:59.413 10:49:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 1829628 00:32:59.413 1829628 00:32:59.413 10:49:05 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 1829628 -gt 20480 ']' 00:32:59.413 10:49:05 nvmf_tcp.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:32:59.413 10:49:05 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3acaabbc-6876-4e09-b821-e77fe66bfcb7 lbd_0 20480 00:32:59.685 10:49:05 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=101807c4-3580-44d5-81ec-942f497c3481 00:32:59.685 10:49:05 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 101807c4-3580-44d5-81ec-942f497c3481 lvs_n_0 00:33:01.596 10:49:06 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=2af95f22-667b-4a5e-8264-48101b34d248 00:33:01.596 10:49:06 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 2af95f22-667b-4a5e-8264-48101b34d248 00:33:01.596 10:49:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=2af95f22-667b-4a5e-8264-48101b34d248 00:33:01.596 10:49:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:33:01.596 10:49:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:33:01.596 10:49:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:33:01.596 10:49:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:01.596 10:49:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:33:01.596 { 00:33:01.596 "uuid": "3acaabbc-6876-4e09-b821-e77fe66bfcb7", 00:33:01.596 "name": "lvs_0", 00:33:01.596 "base_bdev": "Nvme0n1", 00:33:01.596 "total_data_clusters": 457407, 00:33:01.596 "free_clusters": 452287, 00:33:01.596 "block_size": 512, 00:33:01.596 "cluster_size": 4194304 00:33:01.596 }, 00:33:01.596 { 00:33:01.596 "uuid": "2af95f22-667b-4a5e-8264-48101b34d248", 00:33:01.596 "name": "lvs_n_0", 00:33:01.596 "base_bdev": "101807c4-3580-44d5-81ec-942f497c3481", 00:33:01.596 "total_data_clusters": 5114, 00:33:01.596 "free_clusters": 5114, 00:33:01.596 "block_size": 512, 00:33:01.596 "cluster_size": 4194304 00:33:01.596 } 00:33:01.596 ]' 00:33:01.596 10:49:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="2af95f22-667b-4a5e-8264-48101b34d248") .free_clusters' 00:33:01.596 10:49:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:33:01.596 10:49:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="2af95f22-667b-4a5e-8264-48101b34d248") .cluster_size' 00:33:01.596 10:49:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:33:01.596 10:49:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:33:01.596 10:49:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 20456 00:33:01.596 20456 00:33:01.596 10:49:07 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:33:01.596 10:49:07 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2af95f22-667b-4a5e-8264-48101b34d248 lbd_nest_0 20456 00:33:01.596 10:49:07 
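The free-space arithmetic above is get_lvs_free_mb: free_clusters times cluster_size gives bytes, converted to MiB (457407 clusters of 4 MiB = 1829628 MiB), and perf.sh then caps the lvol at 20480 MiB before creating lbd_0. A condensed sketch of the same computation, using the rpc.py path and lvs_0 UUID from this run:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    UUID=3acaabbc-6876-4e09-b821-e77fe66bfcb7    # lvs_0 from the output above

    fc=$("$RPC" bdev_lvol_get_lvstores | jq ".[] | select(.uuid==\"$UUID\") .free_clusters")
    cs=$("$RPC" bdev_lvol_get_lvstores | jq ".[] | select(.uuid==\"$UUID\") .cluster_size")
    free_mb=$(( fc * cs / 1024 / 1024 ))         # 457407 * 4194304 bytes -> 1829628 MiB

    (( free_mb > 20480 )) && free_mb=20480       # perf.sh limits the lvol to 20 GiB
    "$RPC" bdev_lvol_create -u "$UUID" lbd_0 "$free_mb"

The same helper is then rerun against the nested store lvs_n_0 built on top of lbd_0, yielding the 20456 MiB used for lbd_nest_0 above.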
nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=526f8f59-dc01-4100-9cb1-d79f1712b1e4 00:33:01.596 10:49:07 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:01.857 10:49:07 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:33:01.857 10:49:07 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 526f8f59-dc01-4100-9cb1-d79f1712b1e4 00:33:02.117 10:49:07 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:02.117 10:49:07 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:33:02.117 10:49:07 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:33:02.117 10:49:07 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:33:02.117 10:49:07 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:33:02.117 10:49:07 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:02.117 EAL: No free 2048 kB hugepages reported on node 1 00:33:14.436 Initializing NVMe Controllers 00:33:14.436 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:14.436 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:14.436 Initialization complete. Launching workers. 00:33:14.436 ======================================================== 00:33:14.436 Latency(us) 00:33:14.436 Device Information : IOPS MiB/s Average min max 00:33:14.436 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 49.80 0.02 20157.87 161.81 48903.10 00:33:14.436 ======================================================== 00:33:14.436 Total : 49.80 0.02 20157.87 161.81 48903.10 00:33:14.436 00:33:14.436 10:49:18 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:33:14.436 10:49:18 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:14.436 EAL: No free 2048 kB hugepages reported on node 1 00:33:24.422 Initializing NVMe Controllers 00:33:24.422 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:24.422 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:24.422 Initialization complete. Launching workers. 
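After lbd_nest_0 is exported through cnode1, perf.sh sweeps queue depth against IO size using the qd_depth and io_size arrays echoed above. A condensed sketch of that sweep (paths and transport string taken from this run):

    PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
    TGT='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

    qd_depth=("1" "32" "128")
    io_size=("512" "131072")

    for qd in "${qd_depth[@]}"; do
      for o in "${io_size[@]}"; do
        # 50/50 random read/write for 10 seconds per combination
        "$PERF" -q "$qd" -o "$o" -w randrw -M 50 -t 10 -r "$TGT"
      done
    done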
00:33:24.422 ======================================================== 00:33:24.422 Latency(us) 00:33:24.422 Device Information : IOPS MiB/s Average min max 00:33:24.422 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 63.40 7.92 15779.57 4987.46 55869.00 00:33:24.422 ======================================================== 00:33:24.422 Total : 63.40 7.92 15779.57 4987.46 55869.00 00:33:24.422 00:33:24.422 10:49:28 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:33:24.422 10:49:28 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:33:24.422 10:49:28 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:24.422 EAL: No free 2048 kB hugepages reported on node 1 00:33:34.413 Initializing NVMe Controllers 00:33:34.413 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:34.413 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:34.413 Initialization complete. Launching workers. 00:33:34.413 ======================================================== 00:33:34.413 Latency(us) 00:33:34.413 Device Information : IOPS MiB/s Average min max 00:33:34.413 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8657.07 4.23 3696.12 271.63 10345.08 00:33:34.413 ======================================================== 00:33:34.413 Total : 8657.07 4.23 3696.12 271.63 10345.08 00:33:34.413 00:33:34.413 10:49:38 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:33:34.413 10:49:38 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:34.413 EAL: No free 2048 kB hugepages reported on node 1 00:33:44.406 Initializing NVMe Controllers 00:33:44.406 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:44.406 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:44.406 Initialization complete. Launching workers. 00:33:44.406 ======================================================== 00:33:44.406 Latency(us) 00:33:44.406 Device Information : IOPS MiB/s Average min max 00:33:44.406 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3129.74 391.22 10232.63 581.08 24271.07 00:33:44.406 ======================================================== 00:33:44.406 Total : 3129.74 391.22 10232.63 581.08 24271.07 00:33:44.406 00:33:44.406 10:49:49 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:33:44.406 10:49:49 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:33:44.406 10:49:49 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:44.406 EAL: No free 2048 kB hugepages reported on node 1 00:33:54.389 Initializing NVMe Controllers 00:33:54.389 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:54.389 Controller IO queue size 128, less than required. 00:33:54.389 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
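Each of these runs (and the q=128 runs that follow) ends with the same "Total :" summary row. A hypothetical helper, not part of the test scripts, for pulling IOPS and average latency out of that row, assuming the column order IOPS / MiB/s / Average / min / max shown above:

    PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
    "$PERF" -q 32 -o 131072 -w randrw -M 50 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      | awk '/Total[[:space:]]*:/ {printf "iops=%s avg_us=%s\n", $3, $5}'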
00:33:54.389 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:54.389 Initialization complete. Launching workers. 00:33:54.389 ======================================================== 00:33:54.389 Latency(us) 00:33:54.389 Device Information : IOPS MiB/s Average min max 00:33:54.389 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15921.89 7.77 8040.17 3053.66 15844.14 00:33:54.389 ======================================================== 00:33:54.389 Total : 15921.89 7.77 8040.17 3053.66 15844.14 00:33:54.389 00:33:54.389 10:49:59 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:33:54.389 10:49:59 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:54.389 EAL: No free 2048 kB hugepages reported on node 1 00:34:04.379 Initializing NVMe Controllers 00:34:04.379 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:04.379 Controller IO queue size 128, less than required. 00:34:04.379 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:04.379 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:34:04.379 Initialization complete. Launching workers. 00:34:04.379 ======================================================== 00:34:04.379 Latency(us) 00:34:04.379 Device Information : IOPS MiB/s Average min max 00:34:04.379 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1169.10 146.14 109980.68 15546.26 223873.11 00:34:04.379 ======================================================== 00:34:04.379 Total : 1169.10 146.14 109980.68 15546.26 223873.11 00:34:04.379 00:34:04.379 10:50:09 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:04.379 10:50:09 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 526f8f59-dc01-4100-9cb1-d79f1712b1e4 00:34:06.286 10:50:11 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:34:06.286 10:50:11 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 101807c4-3580-44d5-81ec-942f497c3481 00:34:06.286 10:50:11 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:34:06.546 10:50:12 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:34:06.546 10:50:12 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:34:06.546 10:50:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:06.546 10:50:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:34:06.546 10:50:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:06.546 10:50:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:34:06.546 10:50:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:06.546 10:50:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:06.546 rmmod nvme_tcp 00:34:06.546 rmmod nvme_fabrics 00:34:06.546 rmmod nvme_keyring 00:34:06.546 10:50:12 
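The teardown above runs in reverse order of creation: the subsystem is removed first, then the nested lvol and its store, then the base lvol and lvs_0, before nvmftestfini unloads the nvme-tcp modules. The same sequence, condensed, with the UUIDs from this run:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    "$RPC" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    "$RPC" bdev_lvol_delete 526f8f59-dc01-4100-9cb1-d79f1712b1e4   # lbd_nest_0
    "$RPC" bdev_lvol_delete_lvstore -l lvs_n_0
    "$RPC" bdev_lvol_delete 101807c4-3580-44d5-81ec-942f497c3481   # lbd_0
    "$RPC" bdev_lvol_delete_lvstore -l lvs_0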
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:06.546 10:50:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:34:06.546 10:50:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:34:06.546 10:50:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 2155746 ']' 00:34:06.546 10:50:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 2155746 00:34:06.547 10:50:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 2155746 ']' 00:34:06.547 10:50:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 2155746 00:34:06.547 10:50:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:34:06.547 10:50:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:06.547 10:50:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2155746 00:34:06.547 10:50:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:06.547 10:50:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:06.547 10:50:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2155746' 00:34:06.547 killing process with pid 2155746 00:34:06.547 10:50:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 2155746 00:34:06.547 10:50:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 2155746 00:34:08.450 10:50:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:08.450 10:50:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:08.708 10:50:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:08.708 10:50:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:08.708 10:50:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:08.708 10:50:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:08.708 10:50:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:08.708 10:50:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:10.614 10:50:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:10.614 00:34:10.614 real 1m32.757s 00:34:10.614 user 5m25.621s 00:34:10.614 sys 0m15.083s 00:34:10.614 10:50:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:10.614 10:50:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:34:10.614 ************************************ 00:34:10.614 END TEST nvmf_perf 00:34:10.614 ************************************ 00:34:10.614 10:50:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:34:10.614 10:50:16 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:34:10.614 10:50:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:10.614 10:50:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:10.614 10:50:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:10.614 ************************************ 00:34:10.614 START TEST nvmf_fio_host 00:34:10.614 ************************************ 00:34:10.614 10:50:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:34:10.874 * Looking for test 
storage... 00:34:10.874 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:34:10.874 10:50:16 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:18.997 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:18.997 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:34:18.997 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:18.997 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:18.997 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:18.997 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:18.997 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:18.997 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:34:18.997 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:18.997 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:34:18.997 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:34:18.997 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:34:18.997 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:34:18.997 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:34:18.997 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:34:18.997 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:18.997 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:18.997 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:18.997 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:18.997 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:18.997 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:18.997 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:18.997 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:18.997 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:18.997 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:18.997 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:18.997 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:18.997 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:18.997 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:18.997 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:18.997 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:18.997 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:18.997 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:18.997 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:18.997 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:18.997 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
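gather_supported_nvmf_pci_devs above builds tables of Intel E810/X722 and Mellanox device IDs and then walks the PCI bus; 0x8086:0x159b (E810) is what matches on this host. An illustrative standalone check for the same Intel IDs using lspci, not part of nvmf/common.sh:

    # List NICs matching the device IDs the script recognises (E810 0x1592/0x159b, X722 0x37d2).
    for id in 8086:1592 8086:159b 8086:37d2; do
      lspci -D -d "$id" | while read -r bdf rest; do
        echo "Found $bdf ($id)"
      done
    done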
00:34:18.997 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:18.997 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:18.997 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:18.997 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:18.997 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:18.997 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:18.997 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:18.997 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:18.997 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:18.997 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:18.997 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:18.997 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:18.997 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:18.997 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:18.997 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:18.997 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:18.997 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:18.997 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:18.997 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:18.997 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:18.998 Found net devices under 0000:31:00.0: cvl_0_0 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:18.998 Found net devices under 0000:31:00.1: cvl_0_1 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
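Once a supported port is found, the script resolves its kernel interface through sysfs, which is how cvl_0_0 and cvl_0_1 are discovered above. A minimal sketch of that lookup for the two ports found on this host:

    for pci in 0000:31:00.0 0000:31:00.1; do
      for path in /sys/bus/pci/devices/"$pci"/net/*; do
        [[ -e "$path" ]] && echo "Found net devices under $pci: ${path##*/}"
      done
    done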
00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:18.998 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:18.998 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.679 ms 00:34:18.998 00:34:18.998 --- 10.0.0.2 ping statistics --- 00:34:18.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:18.998 rtt min/avg/max/mdev = 0.679/0.679/0.679/0.000 ms 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:18.998 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:18.998 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:34:18.998 00:34:18.998 --- 10.0.0.1 ping statistics --- 00:34:18.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:18.998 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2176084 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2176084 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 2176084 ']' 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:18.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:18.998 10:50:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.998 [2024-07-22 10:50:24.669823] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:34:18.998 [2024-07-22 10:50:24.669872] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:19.257 EAL: No free 2048 kB hugepages reported on node 1 00:34:19.257 [2024-07-22 10:50:24.740021] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:19.258 [2024-07-22 10:50:24.772280] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
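nvmf_tcp_init above splits the two ports across network namespaces: the target port cvl_0_0 is moved into cvl_0_0_ns_spdk with 10.0.0.2/24, the initiator keeps cvl_0_1 with 10.0.0.1/24, a firewall rule opens port 4420, both directions are ping-tested, and nvmf_tgt is then launched inside the namespace. The same steps, condensed (interface names, addresses, and the target path are the ones from this run):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

    # The target is then started inside the namespace:
    ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &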
00:34:19.258 [2024-07-22 10:50:24.772317] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:19.258 [2024-07-22 10:50:24.772326] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:19.258 [2024-07-22 10:50:24.772332] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:19.258 [2024-07-22 10:50:24.772339] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:19.258 [2024-07-22 10:50:24.772481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:19.258 [2024-07-22 10:50:24.772668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:19.258 [2024-07-22 10:50:24.772823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:19.258 [2024-07-22 10:50:24.772824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:34:19.258 10:50:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:19.258 10:50:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:34:19.258 10:50:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:19.517 [2024-07-22 10:50:25.003603] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:19.517 10:50:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:34:19.517 10:50:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:19.517 10:50:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.517 10:50:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:34:19.776 Malloc1 00:34:19.776 10:50:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:19.776 10:50:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:34:20.035 10:50:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:20.035 [2024-07-22 10:50:25.713030] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:20.295 10:50:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:20.295 10:50:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:34:20.295 10:50:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:20.295 10:50:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:34:20.295 10:50:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:20.295 10:50:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:20.295 10:50:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:20.295 10:50:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:20.295 10:50:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:34:20.295 10:50:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:20.295 10:50:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:20.295 10:50:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:20.295 10:50:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:34:20.295 10:50:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:20.295 10:50:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:20.295 10:50:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:20.295 10:50:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:20.295 10:50:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:20.295 10:50:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:20.295 10:50:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:20.295 10:50:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:20.295 10:50:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:20.295 10:50:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:34:20.295 10:50:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:20.922 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:34:20.922 fio-3.35 00:34:20.922 Starting 1 thread 00:34:20.922 EAL: No free 2048 kB hugepages reported on node 1 00:34:23.512 00:34:23.512 test: (groupid=0, jobs=1): err= 0: pid=2176643: Mon Jul 22 10:50:28 2024 00:34:23.512 read: IOPS=9618, BW=37.6MiB/s (39.4MB/s)(75.4MiB/2006msec) 00:34:23.512 slat (usec): min=2, max=279, avg= 2.23, stdev= 2.84 00:34:23.512 clat (usec): min=3630, max=12464, avg=7331.88, stdev=536.66 00:34:23.512 lat (usec): min=3664, max=12467, avg=7334.11, stdev=536.51 00:34:23.512 clat percentiles (usec): 00:34:23.512 | 1.00th=[ 6063], 5.00th=[ 6456], 10.00th=[ 6652], 20.00th=[ 6915], 00:34:23.512 | 30.00th=[ 7046], 40.00th=[ 7177], 50.00th=[ 7308], 60.00th=[ 7439], 00:34:23.512 | 70.00th=[ 7635], 80.00th=[ 7767], 90.00th=[ 7963], 95.00th=[ 8160], 00:34:23.512 | 99.00th=[ 8455], 99.50th=[ 8586], 99.90th=[ 9765], 99.95th=[11600], 00:34:23.512 | 99.99th=[12387] 00:34:23.512 bw ( KiB/s): min=37352, 
max=39160, per=99.92%, avg=38440.00, stdev=786.64, samples=4 00:34:23.512 iops : min= 9338, max= 9790, avg=9610.00, stdev=196.66, samples=4 00:34:23.512 write: IOPS=9617, BW=37.6MiB/s (39.4MB/s)(75.4MiB/2006msec); 0 zone resets 00:34:23.512 slat (usec): min=2, max=270, avg= 2.32, stdev= 2.15 00:34:23.512 clat (usec): min=2887, max=11518, avg=5893.12, stdev=445.55 00:34:23.512 lat (usec): min=2905, max=11520, avg=5895.44, stdev=445.50 00:34:23.512 clat percentiles (usec): 00:34:23.512 | 1.00th=[ 4883], 5.00th=[ 5211], 10.00th=[ 5342], 20.00th=[ 5538], 00:34:23.512 | 30.00th=[ 5669], 40.00th=[ 5800], 50.00th=[ 5866], 60.00th=[ 5997], 00:34:23.512 | 70.00th=[ 6128], 80.00th=[ 6259], 90.00th=[ 6390], 95.00th=[ 6587], 00:34:23.512 | 99.00th=[ 6849], 99.50th=[ 7046], 99.90th=[ 8848], 99.95th=[10159], 00:34:23.512 | 99.99th=[11469] 00:34:23.512 bw ( KiB/s): min=38080, max=38872, per=100.00%, avg=38480.00, stdev=345.21, samples=4 00:34:23.512 iops : min= 9520, max= 9718, avg=9620.00, stdev=86.30, samples=4 00:34:23.512 lat (msec) : 4=0.05%, 10=99.87%, 20=0.08% 00:34:23.512 cpu : usr=73.72%, sys=24.99%, ctx=33, majf=0, minf=6 00:34:23.512 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:34:23.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:23.512 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:23.512 issued rwts: total=19294,19293,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:23.512 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:23.512 00:34:23.512 Run status group 0 (all jobs): 00:34:23.512 READ: bw=37.6MiB/s (39.4MB/s), 37.6MiB/s-37.6MiB/s (39.4MB/s-39.4MB/s), io=75.4MiB (79.0MB), run=2006-2006msec 00:34:23.512 WRITE: bw=37.6MiB/s (39.4MB/s), 37.6MiB/s-37.6MiB/s (39.4MB/s-39.4MB/s), io=75.4MiB (79.0MB), run=2006-2006msec 00:34:23.512 10:50:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:34:23.512 10:50:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:34:23.512 10:50:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:23.512 10:50:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:23.512 10:50:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:23.512 10:50:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:23.512 10:50:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:34:23.512 10:50:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:23.512 10:50:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:23.512 10:50:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:23.512 10:50:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:34:23.512 10:50:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # 
awk '{print $3}' 00:34:23.512 10:50:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:23.512 10:50:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:23.512 10:50:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:23.512 10:50:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:23.512 10:50:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:23.512 10:50:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:23.512 10:50:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:23.512 10:50:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:23.512 10:50:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:34:23.512 10:50:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:34:23.512 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:34:23.512 fio-3.35 00:34:23.512 Starting 1 thread 00:34:23.512 EAL: No free 2048 kB hugepages reported on node 1 00:34:26.056 00:34:26.056 test: (groupid=0, jobs=1): err= 0: pid=2177204: Mon Jul 22 10:50:31 2024 00:34:26.056 read: IOPS=9241, BW=144MiB/s (151MB/s)(290MiB/2006msec) 00:34:26.056 slat (usec): min=3, max=109, avg= 3.67, stdev= 1.61 00:34:26.056 clat (usec): min=2113, max=17686, avg=8359.32, stdev=2004.26 00:34:26.056 lat (usec): min=2117, max=17689, avg=8362.99, stdev=2004.38 00:34:26.056 clat percentiles (usec): 00:34:26.056 | 1.00th=[ 4359], 5.00th=[ 5342], 10.00th=[ 5866], 20.00th=[ 6587], 00:34:26.056 | 30.00th=[ 7177], 40.00th=[ 7701], 50.00th=[ 8291], 60.00th=[ 8848], 00:34:26.056 | 70.00th=[ 9503], 80.00th=[10159], 90.00th=[10814], 95.00th=[11600], 00:34:26.056 | 99.00th=[13304], 99.50th=[14353], 99.90th=[16909], 99.95th=[17171], 00:34:26.056 | 99.99th=[17695] 00:34:26.056 bw ( KiB/s): min=67648, max=82272, per=49.37%, avg=72992.00, stdev=6444.27, samples=4 00:34:26.056 iops : min= 4228, max= 5142, avg=4562.00, stdev=402.77, samples=4 00:34:26.056 write: IOPS=5452, BW=85.2MiB/s (89.3MB/s)(149MiB/1751msec); 0 zone resets 00:34:26.056 slat (usec): min=40, max=399, avg=41.30, stdev= 8.13 00:34:26.056 clat (usec): min=3211, max=15976, avg=9529.49, stdev=1508.75 00:34:26.056 lat (usec): min=3251, max=16095, avg=9570.79, stdev=1510.71 00:34:26.056 clat percentiles (usec): 00:34:26.057 | 1.00th=[ 6521], 5.00th=[ 7308], 10.00th=[ 7701], 20.00th=[ 8291], 00:34:26.057 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[ 9765], 00:34:26.057 | 70.00th=[10159], 80.00th=[10552], 90.00th=[11338], 95.00th=[12256], 00:34:26.057 | 99.00th=[14091], 99.50th=[15008], 99.90th=[15664], 99.95th=[15926], 00:34:26.057 | 99.99th=[15926] 00:34:26.057 bw ( KiB/s): min=68896, max=85984, per=86.88%, avg=75792.00, stdev=7231.03, samples=4 00:34:26.057 iops : min= 4306, max= 5374, avg=4737.00, stdev=451.94, samples=4 00:34:26.057 lat (msec) : 4=0.45%, 10=73.21%, 20=26.34% 00:34:26.057 cpu : usr=83.14%, sys=14.81%, ctx=23, majf=0, minf=29 00:34:26.057 IO depths : 1=0.1%, 2=0.1%, 
4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:34:26.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.057 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:26.057 issued rwts: total=18538,9547,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:26.057 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:26.057 00:34:26.057 Run status group 0 (all jobs): 00:34:26.057 READ: bw=144MiB/s (151MB/s), 144MiB/s-144MiB/s (151MB/s-151MB/s), io=290MiB (304MB), run=2006-2006msec 00:34:26.057 WRITE: bw=85.2MiB/s (89.3MB/s), 85.2MiB/s-85.2MiB/s (89.3MB/s-89.3MB/s), io=149MiB (156MB), run=1751-1751msec 00:34:26.057 10:50:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:26.057 10:50:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:34:26.057 10:50:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:34:26.057 10:50:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:34:26.057 10:50:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:34:26.057 10:50:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:34:26.057 10:50:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:26.057 10:50:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:26.057 10:50:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:34:26.057 10:50:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:34:26.057 10:50:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:34:26.057 10:50:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 -i 10.0.0.2 00:34:26.317 Nvme0n1 00:34:26.577 10:50:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:34:27.148 10:50:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=80e03f9b-3e59-4854-a932-b74334212fb2 00:34:27.148 10:50:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 80e03f9b-3e59-4854-a932-b74334212fb2 00:34:27.148 10:50:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=80e03f9b-3e59-4854-a932-b74334212fb2 00:34:27.148 10:50:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:34:27.148 10:50:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:34:27.148 10:50:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:34:27.148 10:50:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:34:27.148 10:50:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:34:27.148 { 00:34:27.148 "uuid": "80e03f9b-3e59-4854-a932-b74334212fb2", 00:34:27.148 "name": "lvs_0", 00:34:27.148 "base_bdev": "Nvme0n1", 00:34:27.148 "total_data_clusters": 1787, 00:34:27.148 "free_clusters": 1787, 00:34:27.148 "block_size": 512, 00:34:27.148 "cluster_size": 
1073741824 00:34:27.148 } 00:34:27.148 ]' 00:34:27.148 10:50:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="80e03f9b-3e59-4854-a932-b74334212fb2") .free_clusters' 00:34:27.148 10:50:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=1787 00:34:27.148 10:50:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="80e03f9b-3e59-4854-a932-b74334212fb2") .cluster_size' 00:34:27.409 10:50:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:34:27.409 10:50:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=1829888 00:34:27.409 10:50:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 1829888 00:34:27.409 1829888 00:34:27.409 10:50:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1829888 00:34:27.409 317aa761-d402-44ca-8583-89900a7044fd 00:34:27.409 10:50:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:34:27.670 10:50:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:34:27.670 10:50:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:27.930 10:50:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:27.930 10:50:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:27.930 10:50:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:27.930 10:50:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:27.930 10:50:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:27.930 10:50:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:27.930 10:50:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:34:27.930 10:50:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:27.930 10:50:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:27.930 10:50:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:27.930 10:50:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:34:27.930 10:50:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:27.930 10:50:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:27.930 10:50:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 
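The free-MB figure passed to bdev_lvol_create above (1829888) follows directly from the lvstore JSON: free_clusters multiplied by cluster_size, converted to MiB (1787 clusters of 1 GiB each = 1787 * 1024 = 1829888 MiB). A minimal sketch of that calculation, mirroring what the get_lvs_free_mb helper appears to do in the trace; it assumes rpc.py refers to spdk/scripts/rpc.py on PATH and reuses the lvstore UUID shown in the log:

    # Sketch: derive the free MiB of an lvstore the way the helper above does.
    uuid=80e03f9b-3e59-4854-a932-b74334212fb2
    fc=$(rpc.py bdev_lvol_get_lvstores | jq ".[] | select(.uuid==\"$uuid\") .free_clusters")
    cs=$(rpc.py bdev_lvol_get_lvstores | jq ".[] | select(.uuid==\"$uuid\") .cluster_size")
    free_mb=$(( fc * (cs / 1024 / 1024) ))   # 1787 * 1024 = 1829888
    echo "$free_mb"
    # The lvol bdev is then created to span the whole store:
    # rpc.py bdev_lvol_create -l lvs_0 lbd_0 "$free_mb"

The same arithmetic accounts for the nested store later in the run: 457025 clusters of 4 MiB give the 1828100 MiB used for lbd_nest_0.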
00:34:27.930 10:50:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:27.930 10:50:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:27.930 10:50:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:27.930 10:50:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:27.930 10:50:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:27.930 10:50:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:27.930 10:50:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:34:27.930 10:50:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:28.499 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:34:28.499 fio-3.35 00:34:28.499 Starting 1 thread 00:34:28.499 EAL: No free 2048 kB hugepages reported on node 1 00:34:31.047 00:34:31.047 test: (groupid=0, jobs=1): err= 0: pid=2178399: Mon Jul 22 10:50:36 2024 00:34:31.047 read: IOPS=10.5k, BW=41.0MiB/s (43.0MB/s)(82.2MiB/2005msec) 00:34:31.047 slat (usec): min=2, max=114, avg= 2.25, stdev= 1.13 00:34:31.047 clat (usec): min=2325, max=11470, avg=6718.09, stdev=506.87 00:34:31.047 lat (usec): min=2342, max=11472, avg=6720.34, stdev=506.80 00:34:31.047 clat percentiles (usec): 00:34:31.047 | 1.00th=[ 5538], 5.00th=[ 5932], 10.00th=[ 6128], 20.00th=[ 6325], 00:34:31.047 | 30.00th=[ 6456], 40.00th=[ 6587], 50.00th=[ 6718], 60.00th=[ 6849], 00:34:31.047 | 70.00th=[ 6980], 80.00th=[ 7111], 90.00th=[ 7308], 95.00th=[ 7504], 00:34:31.047 | 99.00th=[ 7832], 99.50th=[ 7963], 99.90th=[ 9634], 99.95th=[10814], 00:34:31.047 | 99.99th=[11076] 00:34:31.047 bw ( KiB/s): min=40448, max=42648, per=99.88%, avg=41906.00, stdev=1014.80, samples=4 00:34:31.047 iops : min=10112, max=10662, avg=10476.50, stdev=253.70, samples=4 00:34:31.047 write: IOPS=10.5k, BW=41.0MiB/s (43.0MB/s)(82.1MiB/2005msec); 0 zone resets 00:34:31.047 slat (nsec): min=2157, max=107010, avg=2347.88, stdev=780.99 00:34:31.047 clat (usec): min=1418, max=9509, avg=5379.49, stdev=428.47 00:34:31.047 lat (usec): min=1426, max=9511, avg=5381.84, stdev=428.45 00:34:31.047 clat percentiles (usec): 00:34:31.047 | 1.00th=[ 4359], 5.00th=[ 4686], 10.00th=[ 4883], 20.00th=[ 5014], 00:34:31.047 | 30.00th=[ 5145], 40.00th=[ 5276], 50.00th=[ 5407], 60.00th=[ 5473], 00:34:31.047 | 70.00th=[ 5604], 80.00th=[ 5735], 90.00th=[ 5866], 95.00th=[ 6063], 00:34:31.047 | 99.00th=[ 6325], 99.50th=[ 6390], 99.90th=[ 7963], 99.95th=[ 8848], 00:34:31.047 | 99.99th=[ 9372] 00:34:31.047 bw ( KiB/s): min=41008, max=42560, per=100.00%, avg=41952.00, stdev=662.67, samples=4 00:34:31.047 iops : min=10252, max=10640, avg=10488.00, stdev=165.67, samples=4 00:34:31.047 lat (msec) : 2=0.02%, 4=0.10%, 10=99.84%, 20=0.05% 00:34:31.047 cpu : usr=72.26%, sys=26.10%, ctx=41, majf=0, minf=15 00:34:31.047 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:34:31.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.047 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.1% 00:34:31.047 issued rwts: total=21031,21025,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.047 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:31.047 00:34:31.047 Run status group 0 (all jobs): 00:34:31.047 READ: bw=41.0MiB/s (43.0MB/s), 41.0MiB/s-41.0MiB/s (43.0MB/s-43.0MB/s), io=82.2MiB (86.1MB), run=2005-2005msec 00:34:31.047 WRITE: bw=41.0MiB/s (43.0MB/s), 41.0MiB/s-41.0MiB/s (43.0MB/s-43.0MB/s), io=82.1MiB (86.1MB), run=2005-2005msec 00:34:31.047 10:50:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:34:31.047 10:50:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:34:31.991 10:50:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=e6598879-b338-4226-a4ab-e32a68c2cc72 00:34:31.991 10:50:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb e6598879-b338-4226-a4ab-e32a68c2cc72 00:34:31.991 10:50:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=e6598879-b338-4226-a4ab-e32a68c2cc72 00:34:31.991 10:50:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:34:31.991 10:50:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:34:31.991 10:50:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:34:31.991 10:50:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:34:31.991 10:50:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:34:31.991 { 00:34:31.991 "uuid": "80e03f9b-3e59-4854-a932-b74334212fb2", 00:34:31.991 "name": "lvs_0", 00:34:31.991 "base_bdev": "Nvme0n1", 00:34:31.991 "total_data_clusters": 1787, 00:34:31.991 "free_clusters": 0, 00:34:31.991 "block_size": 512, 00:34:31.991 "cluster_size": 1073741824 00:34:31.991 }, 00:34:31.991 { 00:34:31.991 "uuid": "e6598879-b338-4226-a4ab-e32a68c2cc72", 00:34:31.991 "name": "lvs_n_0", 00:34:31.991 "base_bdev": "317aa761-d402-44ca-8583-89900a7044fd", 00:34:31.991 "total_data_clusters": 457025, 00:34:31.991 "free_clusters": 457025, 00:34:31.991 "block_size": 512, 00:34:31.991 "cluster_size": 4194304 00:34:31.991 } 00:34:31.991 ]' 00:34:31.991 10:50:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="e6598879-b338-4226-a4ab-e32a68c2cc72") .free_clusters' 00:34:31.991 10:50:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=457025 00:34:31.991 10:50:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="e6598879-b338-4226-a4ab-e32a68c2cc72") .cluster_size' 00:34:31.991 10:50:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:34:31.991 10:50:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=1828100 00:34:31.991 10:50:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 1828100 00:34:31.991 1828100 00:34:31.991 10:50:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1828100 00:34:33.376 42395c42-5b93-4198-b5a0-db67a2337597 00:34:33.376 10:50:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:34:33.376 10:50:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:34:33.376 10:50:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:34:33.637 10:50:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:33.637 10:50:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:33.637 10:50:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:33.637 10:50:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:33.637 10:50:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:33.637 10:50:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:33.637 10:50:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:34:33.637 10:50:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:33.637 10:50:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:33.637 10:50:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:33.637 10:50:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:34:33.637 10:50:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:33.637 10:50:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:33.637 10:50:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:33.637 10:50:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:33.637 10:50:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:33.637 10:50:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:33.637 10:50:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:33.637 10:50:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:33.637 10:50:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:33.637 10:50:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:34:33.637 10:50:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:33.897 test: (g=0): 
rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:34:33.897 fio-3.35 00:34:33.897 Starting 1 thread 00:34:33.897 EAL: No free 2048 kB hugepages reported on node 1 00:34:36.439 00:34:36.439 test: (groupid=0, jobs=1): err= 0: pid=2179576: Mon Jul 22 10:50:41 2024 00:34:36.439 read: IOPS=9051, BW=35.4MiB/s (37.1MB/s)(70.9MiB/2006msec) 00:34:36.439 slat (usec): min=2, max=110, avg= 2.28, stdev= 1.11 00:34:36.439 clat (usec): min=2124, max=12618, avg=7818.68, stdev=592.80 00:34:36.439 lat (usec): min=2141, max=12620, avg=7820.96, stdev=592.73 00:34:36.439 clat percentiles (usec): 00:34:36.439 | 1.00th=[ 6456], 5.00th=[ 6849], 10.00th=[ 7111], 20.00th=[ 7373], 00:34:36.439 | 30.00th=[ 7504], 40.00th=[ 7701], 50.00th=[ 7832], 60.00th=[ 7963], 00:34:36.439 | 70.00th=[ 8094], 80.00th=[ 8291], 90.00th=[ 8586], 95.00th=[ 8717], 00:34:36.439 | 99.00th=[ 9110], 99.50th=[ 9241], 99.90th=[ 9896], 99.95th=[11338], 00:34:36.439 | 99.99th=[12518] 00:34:36.439 bw ( KiB/s): min=35072, max=36968, per=99.90%, avg=36170.00, stdev=795.36, samples=4 00:34:36.439 iops : min= 8768, max= 9242, avg=9042.50, stdev=198.84, samples=4 00:34:36.439 write: IOPS=9065, BW=35.4MiB/s (37.1MB/s)(71.0MiB/2006msec); 0 zone resets 00:34:36.439 slat (nsec): min=2165, max=119170, avg=2371.82, stdev=923.23 00:34:36.439 clat (usec): min=1107, max=11405, avg=6236.02, stdev=517.70 00:34:36.439 lat (usec): min=1115, max=11407, avg=6238.39, stdev=517.67 00:34:36.439 clat percentiles (usec): 00:34:36.439 | 1.00th=[ 5014], 5.00th=[ 5473], 10.00th=[ 5669], 20.00th=[ 5866], 00:34:36.439 | 30.00th=[ 5997], 40.00th=[ 6128], 50.00th=[ 6259], 60.00th=[ 6390], 00:34:36.439 | 70.00th=[ 6456], 80.00th=[ 6652], 90.00th=[ 6849], 95.00th=[ 6980], 00:34:36.439 | 99.00th=[ 7373], 99.50th=[ 7504], 99.90th=[ 9765], 99.95th=[10421], 00:34:36.439 | 99.99th=[11338] 00:34:36.439 bw ( KiB/s): min=35920, max=36544, per=100.00%, avg=36260.00, stdev=300.86, samples=4 00:34:36.439 iops : min= 8980, max= 9136, avg=9065.00, stdev=75.22, samples=4 00:34:36.439 lat (msec) : 2=0.01%, 4=0.09%, 10=99.81%, 20=0.08% 00:34:36.439 cpu : usr=72.72%, sys=25.74%, ctx=58, majf=0, minf=15 00:34:36.439 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:34:36.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.439 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:36.439 issued rwts: total=18157,18185,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:36.439 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:36.439 00:34:36.439 Run status group 0 (all jobs): 00:34:36.439 READ: bw=35.4MiB/s (37.1MB/s), 35.4MiB/s-35.4MiB/s (37.1MB/s-37.1MB/s), io=70.9MiB (74.4MB), run=2006-2006msec 00:34:36.439 WRITE: bw=35.4MiB/s (37.1MB/s), 35.4MiB/s-35.4MiB/s (37.1MB/s-37.1MB/s), io=71.0MiB (74.5MB), run=2006-2006msec 00:34:36.439 10:50:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:34:36.699 10:50:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:34:36.699 10:50:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:34:38.612 10:50:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:34:38.872 10:50:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:34:39.440 10:50:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:34:39.440 10:50:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:34:41.980 10:50:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:34:41.980 10:50:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:34:41.980 10:50:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:34:41.980 10:50:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:41.980 10:50:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:34:41.980 10:50:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:41.980 10:50:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:34:41.980 10:50:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:41.980 10:50:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:41.980 rmmod nvme_tcp 00:34:41.980 rmmod nvme_fabrics 00:34:41.980 rmmod nvme_keyring 00:34:41.980 10:50:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:41.980 10:50:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:34:41.980 10:50:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:34:41.980 10:50:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 2176084 ']' 00:34:41.980 10:50:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 2176084 00:34:41.980 10:50:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 2176084 ']' 00:34:41.980 10:50:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 2176084 00:34:41.980 10:50:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:34:41.980 10:50:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:41.980 10:50:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2176084 00:34:41.980 10:50:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:41.980 10:50:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:41.980 10:50:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2176084' 00:34:41.980 killing process with pid 2176084 00:34:41.980 10:50:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 2176084 00:34:41.980 10:50:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 2176084 00:34:41.980 10:50:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:41.980 10:50:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:41.980 10:50:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:41.981 10:50:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:41.981 10:50:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:41.981 10:50:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:41.981 10:50:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
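The teardown above undoes the setup in reverse order: lvol bdevs before their lvstores, the nested store (lvs_n_0) before the base store (lvs_0), and the NVMe controller detach last, after which nvmftestfini unloads the nvme-tcp/nvme-fabrics modules and kills the target. A condensed sketch of that order, with the workspace path abbreviated into a variable; the RPC calls themselves are the ones visible in the trace:

    # Sketch of the cleanup order used by host/fio.sh (paths abbreviated).
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC bdev_lvol_delete lvs_n_0/lbd_nest_0      # nested lvol first
    $RPC bdev_lvol_delete_lvstore -l lvs_n_0      # then the nested store
    $RPC bdev_lvol_delete lvs_0/lbd_0             # base lvol
    $RPC bdev_lvol_delete_lvstore -l lvs_0        # base store
    $RPC bdev_nvme_detach_controller Nvme0        # release the PCIe NVMe device
    # nvmftestfini then rmmod's nvme-tcp/nvme-fabrics/nvme-keyring and kills the nvmf_tgt pid.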
00:34:41.981 10:50:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:43.890 10:50:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:43.890 00:34:43.890 real 0m33.162s 00:34:43.890 user 2m34.573s 00:34:43.890 sys 0m10.410s 00:34:43.890 10:50:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:43.890 10:50:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.890 ************************************ 00:34:43.890 END TEST nvmf_fio_host 00:34:43.890 ************************************ 00:34:43.890 10:50:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:34:43.890 10:50:49 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:34:43.890 10:50:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:43.890 10:50:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:43.890 10:50:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:43.890 ************************************ 00:34:43.890 START TEST nvmf_failover 00:34:43.890 ************************************ 00:34:43.890 10:50:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:34:44.150 * Looking for test storage... 00:34:44.150 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:44.150 10:50:49 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:44.150 10:50:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:34:44.150 10:50:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:44.150 10:50:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:44.150 10:50:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:44.150 10:50:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:44.150 10:50:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:44.150 10:50:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:44.150 10:50:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:44.150 10:50:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:44.150 10:50:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:44.150 10:50:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:44.150 10:50:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:44.150 10:50:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:44.150 10:50:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:44.150 10:50:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:44.150 10:50:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:44.150 10:50:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:44.150 10:50:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:44.150 10:50:49 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:44.150 10:50:49 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:44.150 10:50:49 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:44.150 10:50:49 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.150 10:50:49 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.151 10:50:49 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.151 10:50:49 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:34:44.151 10:50:49 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.151 10:50:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:34:44.151 10:50:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:44.151 10:50:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:44.151 10:50:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:44.151 10:50:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:44.151 10:50:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:44.151 10:50:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:44.151 10:50:49 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:44.151 10:50:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:44.151 10:50:49 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:44.151 10:50:49 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:44.151 10:50:49 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:44.151 10:50:49 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:44.151 10:50:49 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:34:44.151 10:50:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:44.151 10:50:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:44.151 10:50:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:44.151 10:50:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:44.151 10:50:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:44.151 10:50:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:44.151 10:50:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:44.151 10:50:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:44.151 10:50:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:44.151 10:50:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:44.151 10:50:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:34:44.151 10:50:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:52.300 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:52.300 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:34:52.300 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:52.300 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:52.300 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:52.300 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:52.300 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:52.300 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:34:52.300 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:52.300 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:34:52.300 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:34:52.300 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:34:52.300 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:34:52.300 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:34:52.300 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:34:52.300 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:52.300 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:52.300 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:52.300 10:50:57 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:52.300 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:52.300 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:52.300 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:52.300 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:52.300 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:52.301 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:52.301 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:52.301 10:50:57 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:52.301 Found net devices under 0000:31:00.0: cvl_0_0 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:52.301 Found net devices under 0000:31:00.1: cvl_0_1 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:52.301 
10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:52.301 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:52.301 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.522 ms 00:34:52.301 00:34:52.301 --- 10.0.0.2 ping statistics --- 00:34:52.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:52.301 rtt min/avg/max/mdev = 0.522/0.522/0.522/0.000 ms 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:52.301 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:52.301 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:34:52.301 00:34:52.301 --- 10.0.0.1 ping statistics --- 00:34:52.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:52.301 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=2185597 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 2185597 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2185597 ']' 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:34:52.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:52.301 10:50:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:52.301 [2024-07-22 10:50:57.612059] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:34:52.301 [2024-07-22 10:50:57.612107] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:52.301 EAL: No free 2048 kB hugepages reported on node 1 00:34:52.301 [2024-07-22 10:50:57.698665] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:52.301 [2024-07-22 10:50:57.730904] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:52.301 [2024-07-22 10:50:57.730941] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:52.301 [2024-07-22 10:50:57.730948] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:52.301 [2024-07-22 10:50:57.730955] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:52.301 [2024-07-22 10:50:57.730961] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:52.301 [2024-07-22 10:50:57.731070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:52.301 [2024-07-22 10:50:57.731217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:52.301 [2024-07-22 10:50:57.731218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:34:52.870 10:50:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:52.870 10:50:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:34:52.870 10:50:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:52.870 10:50:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:52.870 10:50:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:52.870 10:50:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:52.870 10:50:58 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:52.870 [2024-07-22 10:50:58.568822] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:53.129 10:50:58 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:34:53.129 Malloc0 00:34:53.130 10:50:58 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:53.389 10:50:58 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:53.648 10:50:59 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:53.648 [2024-07-22 10:50:59.242833] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:53.648 10:50:59 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:53.908 [2024-07-22 10:50:59.403270] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:53.908 10:50:59 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:34:53.908 [2024-07-22 10:50:59.563768] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:34:53.908 10:50:59 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:34:53.908 10:50:59 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2185962 00:34:53.908 10:50:59 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:53.908 10:50:59 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2185962 /var/tmp/bdevperf.sock 00:34:53.908 10:50:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2185962 ']' 00:34:53.908 10:50:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:53.908 10:50:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:53.908 10:50:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:53.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
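Before the failover runs start, the target side is a single Malloc-backed subsystem exposed on three TCP listeners (4420/4421/4422); bdevperf then attaches paths to it and the test removes and re-adds listeners to force path switches. A condensed sketch of that setup, with paths abbreviated and the three add_listener calls folded into a loop; the commands and the bdevperf flags are copied from the trace:

    # Sketch of the host/failover.sh target setup (paths abbreviated).
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do
        $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
    done
    # bdevperf is started idle (-z) on its own RPC socket and driven via perform_tests:
    # .../spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f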
00:34:53.908 10:50:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:53.908 10:50:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:54.186 10:50:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:54.186 10:50:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:34:54.186 10:50:59 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:54.445 NVMe0n1 00:34:54.445 10:51:00 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:54.705 00:34:54.705 10:51:00 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2186253 00:34:54.705 10:51:00 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:34:54.705 10:51:00 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:55.643 10:51:01 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:55.902 [2024-07-22 10:51:01.465596] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6f420 is same with the state(5) to be set 00:34:55.902 [2024-07-22 10:51:01.465634] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6f420 is same with the state(5) to be set 00:34:55.902 [2024-07-22 10:51:01.465640] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6f420 is same with the state(5) to be set 00:34:55.902 [2024-07-22 10:51:01.465648] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6f420 is same with the state(5) to be set 00:34:55.902 [2024-07-22 10:51:01.465653] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6f420 is same with the state(5) to be set 00:34:55.902 [2024-07-22 10:51:01.465658] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6f420 is same with the state(5) to be set 00:34:55.902 [2024-07-22 10:51:01.465662] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6f420 is same with the state(5) to be set 00:34:55.902 [2024-07-22 10:51:01.465667] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6f420 is same with the state(5) to be set 00:34:55.902 [2024-07-22 10:51:01.465671] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6f420 is same with the state(5) to be set 00:34:55.902 [2024-07-22 10:51:01.465675] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6f420 is same with the state(5) to be set 00:34:55.902 [2024-07-22 10:51:01.465680] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6f420 is same with the state(5) to be set 00:34:55.902 [2024-07-22 10:51:01.465684] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6f420 is same with the state(5) to be set 00:34:55.902 [2024-07-22 10:51:01.465689] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6f420 is same with the state(5) to be set 00:34:55.903 [2024-07-22 10:51:01.465693] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6f420 is same with the state(5) to be set 00:34:55.903 [2024-07-22 10:51:01.465697] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6f420 is same with the state(5) to be set 00:34:55.903 [2024-07-22 10:51:01.465702] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6f420 is same with the state(5) to be set 00:34:55.903 [2024-07-22 10:51:01.465706] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6f420 is same with the state(5) to be set 00:34:55.903 [2024-07-22 10:51:01.465711] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6f420 is same with the state(5) to be set 00:34:55.903 [2024-07-22 10:51:01.465715] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6f420 is same with the state(5) to be set 00:34:55.903 [2024-07-22 10:51:01.465720] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6f420 is same with the state(5) to be set 00:34:55.903 [2024-07-22 10:51:01.465724] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6f420 is same with the state(5) to be set 00:34:55.903 [2024-07-22 10:51:01.465728] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6f420 is same with the state(5) to be set 00:34:55.903 [2024-07-22 10:51:01.465733] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6f420 is same with the state(5) to be set 00:34:55.903 [2024-07-22 10:51:01.465738] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6f420 is same with the state(5) to be set 00:34:55.903 [2024-07-22 10:51:01.465742] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6f420 is same with the state(5) to be set 00:34:55.903 [2024-07-22 10:51:01.465746] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6f420 is same with the state(5) to be set 00:34:55.903 [2024-07-22 10:51:01.465751] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6f420 is same with the state(5) to be set 00:34:55.903 [2024-07-22 10:51:01.465755] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6f420 is same with the state(5) to be set 00:34:55.903 [2024-07-22 10:51:01.465759] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6f420 is same with the state(5) to be set 00:34:55.903 [2024-07-22 10:51:01.465763] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6f420 is same with the state(5) to be set 00:34:55.903 [2024-07-22 10:51:01.465769] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6f420 is same with the state(5) to be set 00:34:55.903 10:51:01 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:34:59.194 10:51:04 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:59.194 00:34:59.194 10:51:04 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:59.454 [2024-07-22 10:51:04.908067] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e701e0 is same with the state(5) to be set 00:34:59.454 [2024-07-22 10:51:04.908099] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e701e0 is same with the state(5) to be set 00:34:59.454 [2024-07-22 10:51:04.908104] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e701e0 is same with the state(5) to be set 00:34:59.454 [2024-07-22 10:51:04.908109] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e701e0 is same with the state(5) to be set 00:34:59.454 [2024-07-22 10:51:04.908113] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e701e0 is same with the state(5) to be set 00:34:59.454 [2024-07-22 10:51:04.908119] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e701e0 is same with the state(5) to be set 00:34:59.454 [2024-07-22 10:51:04.908124] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e701e0 is same with the state(5) to be set 00:34:59.454 [2024-07-22 10:51:04.908128] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e701e0 is same with the state(5) to be set 00:34:59.454 [2024-07-22 10:51:04.908132] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e701e0 is same with the state(5) to be set 00:34:59.454 [2024-07-22 10:51:04.908137] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e701e0 is same with the state(5) to be set 00:34:59.454 [2024-07-22 10:51:04.908141] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e701e0 is same with the state(5) to be set 00:34:59.454 [2024-07-22 10:51:04.908146] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e701e0 is same with the state(5) to be set 00:34:59.454 [2024-07-22 10:51:04.908150] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e701e0 is same with the state(5) to be set 00:34:59.454 [2024-07-22 10:51:04.908154] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e701e0 is same with the state(5) to be set 00:34:59.454 [2024-07-22 10:51:04.908159] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e701e0 is same with the state(5) to be set 00:34:59.454 [2024-07-22 10:51:04.908163] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e701e0 is same with the state(5) to be set 00:34:59.454 10:51:04 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:35:02.829 10:51:07 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:02.829 [2024-07-22 10:51:08.086492] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:02.829 10:51:08 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:35:03.770 10:51:09 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:35:03.770 [2024-07-22 
10:51:09.264564] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e71590 is same with the state(5) to be set 00:35:03.770 [2024-07-22 10:51:09.264592] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e71590 is same with the state(5) to be set 00:35:03.770 [2024-07-22 10:51:09.264598] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e71590 is same with the state(5) to be set 00:35:03.770 [2024-07-22 10:51:09.264609] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e71590 is same with the state(5) to be set 00:35:03.770 [2024-07-22 10:51:09.264614] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e71590 is same with the state(5) to be set 00:35:03.770 [2024-07-22 10:51:09.264619] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e71590 is same with the state(5) to be set 00:35:03.770 [2024-07-22 10:51:09.264623] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e71590 is same with the state(5) to be set 00:35:03.770 [2024-07-22 10:51:09.264627] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e71590 is same with the state(5) to be set 00:35:03.770 [2024-07-22 10:51:09.264632] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e71590 is same with the state(5) to be set 00:35:03.770 [2024-07-22 10:51:09.264636] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e71590 is same with the state(5) to be set 00:35:03.770 [2024-07-22 10:51:09.264641] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e71590 is same with the state(5) to be set 00:35:03.770 [2024-07-22 10:51:09.264646] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e71590 is same with the state(5) to be set 00:35:03.770 [2024-07-22 10:51:09.264650] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e71590 is same with the state(5) to be set 00:35:03.770 [2024-07-22 10:51:09.264655] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e71590 is same with the state(5) to be set 00:35:03.770 [2024-07-22 10:51:09.264659] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e71590 is same with the state(5) to be set 00:35:03.770 [2024-07-22 10:51:09.264664] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e71590 is same with the state(5) to be set 00:35:03.770 [2024-07-22 10:51:09.264668] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e71590 is same with the state(5) to be set 00:35:03.770 [2024-07-22 10:51:09.264673] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e71590 is same with the state(5) to be set 00:35:03.770 [2024-07-22 10:51:09.264677] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e71590 is same with the state(5) to be set 00:35:03.770 [2024-07-22 10:51:09.264681] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e71590 is same with the state(5) to be set 00:35:03.770 [2024-07-22 10:51:09.264686] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e71590 is same with the state(5) to be set 00:35:03.770 [2024-07-22 10:51:09.264691] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e71590 is same 
with the state(5) to be set 00:35:03.770 [2024-07-22 10:51:09.264695] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e71590 is same with the state(5) to be set 00:35:03.770 [2024-07-22 10:51:09.264699] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e71590 is same with the state(5) to be set 00:35:03.770 [2024-07-22 10:51:09.264704] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e71590 is same with the state(5) to be set 00:35:03.770 10:51:09 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 2186253 00:35:10.348 0 00:35:10.348 10:51:15 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 2185962 00:35:10.348 10:51:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2185962 ']' 00:35:10.348 10:51:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2185962 00:35:10.348 10:51:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:35:10.348 10:51:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:10.348 10:51:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2185962 00:35:10.348 10:51:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:10.348 10:51:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:10.348 10:51:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2185962' 00:35:10.348 killing process with pid 2185962 00:35:10.348 10:51:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2185962 00:35:10.348 10:51:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2185962 00:35:10.348 10:51:15 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:35:10.348 [2024-07-22 10:50:59.629165] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:35:10.348 [2024-07-22 10:50:59.629221] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2185962 ] 00:35:10.348 EAL: No free 2048 kB hugepages reported on node 1 00:35:10.348 [2024-07-22 10:50:59.693998] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:10.348 [2024-07-22 10:50:59.724907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:10.348 Running I/O for 15 seconds... 
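The failover steps replayed above reduce to a short sequence of SPDK RPC calls: give bdevperf a second path to the same subsystem, then shuffle the target's listeners so the initiator is forced off the port it is currently using. A minimal sketch of that sequence, using only the commands that appear in this run (the RPC and NQN shell variables are shorthand introduced here, not part of the test script):

# Sketch of the RPC sequence from host/failover.sh shown above; assumes the
# nvmf target and the bdevperf instance behind /var/tmp/bdevperf.sock are already up.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

# Add a second path (port 4422) to the existing NVMe0 controller inside bdevperf.
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $NQN

# Remove the listener the initiator is currently connected to, forcing a failover.
$RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4421
sleep 3

# Restore the original listener, then retire the temporary one.
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420
sleep 1
$RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4422
# The test then waits for bdevperf (running I/O for 15 seconds) to finish.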
00:35:10.348 [2024-07-22 10:51:01.467284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:96960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.348 [2024-07-22 10:51:01.467318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.467336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:96968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.348 [2024-07-22 10:51:01.467344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.467355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:96976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.348 [2024-07-22 10:51:01.467362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.467372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:96984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.348 [2024-07-22 10:51:01.467379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.467388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.348 [2024-07-22 10:51:01.467403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.467413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.348 [2024-07-22 10:51:01.467420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.467430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:97008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.348 [2024-07-22 10:51:01.467437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.467446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:97016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.348 [2024-07-22 10:51:01.467454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.467464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:97024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.348 [2024-07-22 10:51:01.467471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.467480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:97032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.348 [2024-07-22 10:51:01.467487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.467497] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:97040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.348 [2024-07-22 10:51:01.467503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.467518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:97048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.348 [2024-07-22 10:51:01.467526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.467535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:97056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.348 [2024-07-22 10:51:01.467543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.467552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:97064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.348 [2024-07-22 10:51:01.467559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.467569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:97072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.348 [2024-07-22 10:51:01.467576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.467585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:97080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.348 [2024-07-22 10:51:01.467592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.467601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:97088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.348 [2024-07-22 10:51:01.467609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.467618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:97096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.348 [2024-07-22 10:51:01.467625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.467634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:97104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.348 [2024-07-22 10:51:01.467641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.467651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:97112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.348 [2024-07-22 10:51:01.467658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.467668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:110 nsid:1 lba:97120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.348 [2024-07-22 10:51:01.467675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.467684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:97128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.348 [2024-07-22 10:51:01.467691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.467701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:97136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.348 [2024-07-22 10:51:01.467708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.467717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:97144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.348 [2024-07-22 10:51:01.467727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.467736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:97152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.348 [2024-07-22 10:51:01.467743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.467753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:97160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.348 [2024-07-22 10:51:01.467761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.467769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:97168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.348 [2024-07-22 10:51:01.467777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.467786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.348 [2024-07-22 10:51:01.467793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.467802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:97184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.348 [2024-07-22 10:51:01.467810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.467820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:97192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.348 [2024-07-22 10:51:01.467828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.467837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:97200 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:35:10.348 [2024-07-22 10:51:01.467844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.467854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:97208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.348 [2024-07-22 10:51:01.467861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.467870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:97216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.348 [2024-07-22 10:51:01.467879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.467888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:97224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.348 [2024-07-22 10:51:01.467895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.467904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:97232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.348 [2024-07-22 10:51:01.467912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.467921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:97240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.348 [2024-07-22 10:51:01.467928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.467941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:97248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.348 [2024-07-22 10:51:01.467949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.467961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:97256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.348 [2024-07-22 10:51:01.467971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.467981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:97264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.348 [2024-07-22 10:51:01.467988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.467997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:97272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.348 [2024-07-22 10:51:01.468004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.468014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:97280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.348 [2024-07-22 
10:51:01.468023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.468035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:97288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.348 [2024-07-22 10:51:01.468043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.468054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:97296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.348 [2024-07-22 10:51:01.468063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.468073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:97304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.348 [2024-07-22 10:51:01.468082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.468092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:97312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.348 [2024-07-22 10:51:01.468099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.468109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:97320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.348 [2024-07-22 10:51:01.468116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.468126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:97328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.348 [2024-07-22 10:51:01.468133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.468142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:97336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.348 [2024-07-22 10:51:01.468150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.468160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:97344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.348 [2024-07-22 10:51:01.468166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.468177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:97352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.348 [2024-07-22 10:51:01.468184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.468193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:97360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.348 [2024-07-22 10:51:01.468200] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.468209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:97368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.348 [2024-07-22 10:51:01.468217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.468226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:97376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.348 [2024-07-22 10:51:01.468233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.468243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:97384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.348 [2024-07-22 10:51:01.468249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.468258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:96904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.348 [2024-07-22 10:51:01.468265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.468275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:96912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.348 [2024-07-22 10:51:01.468283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.348 [2024-07-22 10:51:01.468292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:97392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.348 [2024-07-22 10:51:01.468299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.468308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:97400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.468315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.468324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:97408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.468332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.468341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:97416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.468348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.468357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:97424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.468365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.468374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:97432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.468383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.468392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.468403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.468412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:97448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.468419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.468428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:97456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.468435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.468445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:97464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.468452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.468462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:97472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.468469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.468478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:97480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.468485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.468494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:97488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.468502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.468511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:97496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.468518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.468526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.468534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.468543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:97512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.468550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.468559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:97520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.468566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.468575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:97528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.468582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.468593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:97536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.468600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.468609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:97544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.468616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.468625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:97552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.468632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.468641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:97560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.468648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.468658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:97568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.468665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.468675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:97576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.468682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.468690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:97584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.468697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 
10:51:01.468707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:97592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.468714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.468724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:97600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.468731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.468740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:97608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.468747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.468756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:97616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.468764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.468774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:97624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.468780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.468790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:97632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.468798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.468807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:97640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.468815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.468824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:97648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.468832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.468841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:97656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.468849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.468858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:97664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.468867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.468876] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:97672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.468883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.468892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:97680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.468900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.468909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:97688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.468917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.468926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:97696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.468933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.468942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:97704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.468949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.468958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:97712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.468966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.468976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:97720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.468983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.468992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:97728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.468999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.469008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:97736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.469017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.469027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:97744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.469035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.469044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:107 nsid:1 lba:97752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.469051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.469060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:97760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.469068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.469077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.469085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.469094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:97776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.469101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.469110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:97784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.469117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.469127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:97792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.469134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.469143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:97800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.469150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.469159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:97808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.469166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.469177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:97816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.469184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.469193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:97824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.469200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.469209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:97832 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.469216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.469227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:97840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.469235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.469244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:97848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.469251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.469260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:97856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.469267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.469278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:97864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.469285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.469294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:97872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.469302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.469311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:97880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.469317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.469326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:97888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.469334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.469343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:97896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.469350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.469359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:97904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 10:51:01.469366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.469375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.349 [2024-07-22 
10:51:01.469383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.469393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:96920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.349 [2024-07-22 10:51:01.469403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.469412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:96928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.349 [2024-07-22 10:51:01.469419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.469428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:96936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.349 [2024-07-22 10:51:01.469440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.469450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:96944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.349 [2024-07-22 10:51:01.469457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.469466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:96952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.349 [2024-07-22 10:51:01.469473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.469492] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:10.349 [2024-07-22 10:51:01.469500] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:10.349 [2024-07-22 10:51:01.469507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97920 len:8 PRP1 0x0 PRP2 0x0 00:35:10.349 [2024-07-22 10:51:01.469515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.469551] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x186a860 was disconnected and freed. reset controller. 
00:35:10.349 [2024-07-22 10:51:01.469561] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:35:10.349 [2024-07-22 10:51:01.469580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:10.349 [2024-07-22 10:51:01.469589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.349 [2024-07-22 10:51:01.469598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:10.350 [2024-07-22 10:51:01.469606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:01.469614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:10.350 [2024-07-22 10:51:01.469621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:01.469629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:10.350 [2024-07-22 10:51:01.469637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:01.469645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:10.350 [2024-07-22 10:51:01.469674] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x183d930 (9): Bad file descriptor 00:35:10.350 [2024-07-22 10:51:01.473215] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:10.350 [2024-07-22 10:51:01.639671] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:35:10.350 [2024-07-22 10:51:04.908329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:10.350 [2024-07-22 10:51:04.908367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.908378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:10.350 [2024-07-22 10:51:04.908386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.908406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:10.350 [2024-07-22 10:51:04.908414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.908422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:10.350 [2024-07-22 10:51:04.908429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.908437] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x183d930 is same with the state(5) to be set 00:35:10.350 [2024-07-22 10:51:04.908500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:50632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.350 [2024-07-22 10:51:04.908510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.908523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:50640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.350 [2024-07-22 10:51:04.908531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.908541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:50648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.350 [2024-07-22 10:51:04.908548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.908558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:50656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.350 [2024-07-22 10:51:04.908565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.908575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:50664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.350 [2024-07-22 10:51:04.908582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.908591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:50672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.350 [2024-07-22 10:51:04.908598] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.908607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:50680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.350 [2024-07-22 10:51:04.908614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.908624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:50688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.350 [2024-07-22 10:51:04.908631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.908640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:50696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.350 [2024-07-22 10:51:04.908647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.908657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:50704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.350 [2024-07-22 10:51:04.908663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.908672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:50712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.350 [2024-07-22 10:51:04.908682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.908691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:50720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.350 [2024-07-22 10:51:04.908698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.908708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:50784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.350 [2024-07-22 10:51:04.908716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.908725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:50792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.350 [2024-07-22 10:51:04.908732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.908741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:50800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.350 [2024-07-22 10:51:04.908748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.908757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:50808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.350 [2024-07-22 10:51:04.908764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.908773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:50816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.350 [2024-07-22 10:51:04.908780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.908789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:50824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.350 [2024-07-22 10:51:04.908796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.908805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:50832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.350 [2024-07-22 10:51:04.908812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.908821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:50840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.350 [2024-07-22 10:51:04.908829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.908838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:50848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.350 [2024-07-22 10:51:04.908845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.908854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:50856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.350 [2024-07-22 10:51:04.908861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.908870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:50864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.350 [2024-07-22 10:51:04.908877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.908887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:50872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.350 [2024-07-22 10:51:04.908895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.908904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:50880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.350 [2024-07-22 10:51:04.908913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.908922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:50888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.350 [2024-07-22 10:51:04.908929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.908938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:50896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.350 [2024-07-22 10:51:04.908945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.908954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:50904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.350 [2024-07-22 10:51:04.908961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.908971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:50728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.350 [2024-07-22 10:51:04.908978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.908987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:50736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.350 [2024-07-22 10:51:04.908996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.909005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:50744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.350 [2024-07-22 10:51:04.909012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.909021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:50752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.350 [2024-07-22 10:51:04.909028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.909037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:50760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.350 [2024-07-22 10:51:04.909044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.909054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:50768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.350 [2024-07-22 10:51:04.909061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.909070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:50776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.350 [2024-07-22 10:51:04.909077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.909086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:50912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.350 [2024-07-22 10:51:04.909095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 
[2024-07-22 10:51:04.909104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:50920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.350 [2024-07-22 10:51:04.909111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.909120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:50928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.350 [2024-07-22 10:51:04.909127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.909136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:50936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.350 [2024-07-22 10:51:04.909143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.909152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:50944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.350 [2024-07-22 10:51:04.909159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.909169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:50952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.350 [2024-07-22 10:51:04.909175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.909184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:50960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.350 [2024-07-22 10:51:04.909191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.909200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:50968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.350 [2024-07-22 10:51:04.909207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.909216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:50976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.350 [2024-07-22 10:51:04.909223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.909232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:50984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.350 [2024-07-22 10:51:04.909238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.909247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:50992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.350 [2024-07-22 10:51:04.909255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.909264] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:51000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.350 [2024-07-22 10:51:04.909271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.909279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:51008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.350 [2024-07-22 10:51:04.909286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.909296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:51016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.350 [2024-07-22 10:51:04.909304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.909313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:51024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.350 [2024-07-22 10:51:04.909320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.909329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:51032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.350 [2024-07-22 10:51:04.909336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.909345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:51040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.350 [2024-07-22 10:51:04.909352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.909361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:51048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.350 [2024-07-22 10:51:04.909368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.909376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:51056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.350 [2024-07-22 10:51:04.909383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.909392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:51064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.350 [2024-07-22 10:51:04.909404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.350 [2024-07-22 10:51:04.909414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:51072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.909420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.909430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:65 nsid:1 lba:51080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.909436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.909445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:51088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.909452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.909461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:51096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.909469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.909478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:51104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.909484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.909494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:51112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.909501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.909512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:51120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.909520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.909529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:51128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.909537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.909547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:51136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.909556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.909566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:51144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.909575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.909584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:51152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.909591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.909600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:51160 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:35:10.351 [2024-07-22 10:51:04.909608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.909619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:51168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.909627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.909637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.909645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.909655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:51184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.909664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.909674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:51192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.909682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.909693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:51200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.909700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.909709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:51208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.909717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.909726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:51216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.909736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.909745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:51224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.909752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.909761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:51232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.909768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.909777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:51240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.909784] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.909793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:51248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.909799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.909808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:51256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.909816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.909825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:51264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.909832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.909841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:51272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.909848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.909857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:51280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.909864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.909874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:51288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.909881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.909890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:51296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.909897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.909906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:51304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.909913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.909922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:51312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.909930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.909943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:51320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.909950] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.909959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:51328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.909966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.909975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:51336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.909982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.909991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:51344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.909998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.910007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:51352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.910014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.910023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:51360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.910030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.910040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:51368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.910048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.910057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:51376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.910063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.910072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:51384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.910080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.910090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:51392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.910097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.910106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:51400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.910113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.910122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:51408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.910130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.910140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:51416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.910149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.910158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:51424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.910165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.910174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:51432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.910181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.910191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:51440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.910198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.910207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:51448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.910213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.910222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:51456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.910229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.910239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:51464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.910246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.910255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:51472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.910262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.910271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:51480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.910278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 
[2024-07-22 10:51:04.910287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:51488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.910295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.910304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:51496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.910311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.910320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:51504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.910327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.910336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:51512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.910343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.910352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:51520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.910361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.910370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:51528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.910377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.910386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:51536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.910393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.910406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:51544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.910414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.910423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:51552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.910430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.910439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:51560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.910446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.910455] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:51568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.910463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.910473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:51576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.910479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.910488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:51584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.910495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.910504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:51592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.910512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.910521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:51600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.910528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.910537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:51608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.910544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.910553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:51616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.910561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.910572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:51624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.910580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.910588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:51632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.910595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.910604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:51640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.351 [2024-07-22 10:51:04.910612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.910631] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:35:10.351 [2024-07-22 10:51:04.910637] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:10.351 [2024-07-22 10:51:04.910644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51648 len:8 PRP1 0x0 PRP2 0x0 00:35:10.351 [2024-07-22 10:51:04.910651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.351 [2024-07-22 10:51:04.910687] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x186cab0 was disconnected and freed. reset controller. 00:35:10.351 [2024-07-22 10:51:04.910697] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:35:10.352 [2024-07-22 10:51:04.910705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:10.352 [2024-07-22 10:51:04.914245] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:10.352 [2024-07-22 10:51:04.914268] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x183d930 (9): Bad file descriptor 00:35:10.352 [2024-07-22 10:51:04.951027] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:35:10.352 [2024-07-22 10:51:09.265731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:64528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.265771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.265787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:64536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.265795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.265805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:64544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.265813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.265822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:64552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.265829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.265838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:64560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.265846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.265855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:64568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.265867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.265876] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:54 nsid:1 lba:64576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.265883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.265893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:64584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.265900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.265909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:64592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.265917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.265926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:64600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.265933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.265942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:64608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.265951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.265960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:64616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.265967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.265976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:64624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.265983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.265993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:64632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.266000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.266009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:64640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.266016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.266025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:64648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.266032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.266041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 
lba:64656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.266049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.266059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:64664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.266066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.266077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:64672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.266084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.266094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:64680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.266101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.266110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:64688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.266117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.266127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:64696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.266133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.266142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.266150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.266159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:64712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.266167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.266176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:64720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.266183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.266192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:64728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.266199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.266209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:64736 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.266216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.266225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:64744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.266231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.266240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:64752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.266247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.266257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:64760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.266264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.266273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:64768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.266282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.266291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:64776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.266298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.266307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:64784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.266315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.266325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:64792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.266332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.266341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:64800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.266348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.266358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:64808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.266365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.266374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:64816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 
10:51:09.266381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.266390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:64824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.266402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.266412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:64832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.266420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.266429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:64840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.266436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.266445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:64848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.266452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.266461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:64856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.266469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.266479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:64864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.266487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.266497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:64872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.266506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.266517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:64880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.266524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.266533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:64888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.266540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.266550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:64896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.266560] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.266569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:64904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.266577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.266587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:64912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.266596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.266607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:64920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.266615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.266625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:64928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.266634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.266643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:64936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.266650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.266659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:64944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.266667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.266676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:64952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.266684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.266693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:64960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.266700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.266709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:64968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.266716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.266726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:64976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.266733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.266742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:64984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.266749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.266758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:64992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.266765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.266775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:65000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.266782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.266791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:65008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.266798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.266807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:65016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.266813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.266823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:65024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.266831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.266840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:65032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.266847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.266856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:65040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.266864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.266873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:65048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.266881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.266890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:65056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.266897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.266906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:65064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.266914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.352 [2024-07-22 10:51:09.266923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:65072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.352 [2024-07-22 10:51:09.266932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.266942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:65088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.353 [2024-07-22 10:51:09.266949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.266958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:65096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.353 [2024-07-22 10:51:09.266964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.266974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:65104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.353 [2024-07-22 10:51:09.266981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.266991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:65112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.353 [2024-07-22 10:51:09.266998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.267007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:65120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.353 [2024-07-22 10:51:09.267014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.267023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:65128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.353 [2024-07-22 10:51:09.267030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.267039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:65136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.353 [2024-07-22 10:51:09.267046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.267055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:65144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.353 [2024-07-22 10:51:09.267062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 
[2024-07-22 10:51:09.267071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:65152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.353 [2024-07-22 10:51:09.267078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.267088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:65160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.353 [2024-07-22 10:51:09.267095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.267104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:65168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.353 [2024-07-22 10:51:09.267111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.267120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:65176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.353 [2024-07-22 10:51:09.267128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.267138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:65184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.353 [2024-07-22 10:51:09.267146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.267155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:65192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.353 [2024-07-22 10:51:09.267162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.267171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:65200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.353 [2024-07-22 10:51:09.267178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.267187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:65208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.353 [2024-07-22 10:51:09.267195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.267205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:65216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.353 [2024-07-22 10:51:09.267212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.267221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:65224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.353 [2024-07-22 10:51:09.267227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.267236] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:65232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.353 [2024-07-22 10:51:09.267243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.267252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:65240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.353 [2024-07-22 10:51:09.267259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.267268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:65248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.353 [2024-07-22 10:51:09.267275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.267284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:65256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.353 [2024-07-22 10:51:09.267291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.267301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:65264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.353 [2024-07-22 10:51:09.267308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.267317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:65272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.353 [2024-07-22 10:51:09.267323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.267333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:65280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.353 [2024-07-22 10:51:09.267341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.267350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:65288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.353 [2024-07-22 10:51:09.267358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.267367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:65296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.353 [2024-07-22 10:51:09.267374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.267383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:65304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.353 [2024-07-22 10:51:09.267390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.267402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:34 nsid:1 lba:65312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.353 [2024-07-22 10:51:09.267410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.267421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:65320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.353 [2024-07-22 10:51:09.267428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.267437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.353 [2024-07-22 10:51:09.267444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.267453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:65336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.353 [2024-07-22 10:51:09.267460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.267470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:65344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.353 [2024-07-22 10:51:09.267477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.267486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:65352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.353 [2024-07-22 10:51:09.267493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.267502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:65360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.353 [2024-07-22 10:51:09.267509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.267519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:65368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.353 [2024-07-22 10:51:09.267526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.267535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:65376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.353 [2024-07-22 10:51:09.267542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.267551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:65384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.353 [2024-07-22 10:51:09.267560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.267569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:65392 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:35:10.353 [2024-07-22 10:51:09.267577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.267586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.353 [2024-07-22 10:51:09.267593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.267601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:65408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.353 [2024-07-22 10:51:09.267609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.267618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:65416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.353 [2024-07-22 10:51:09.267625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.267634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:65424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.353 [2024-07-22 10:51:09.267641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.267650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.353 [2024-07-22 10:51:09.267657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.267666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:65440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.353 [2024-07-22 10:51:09.267673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.267683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:65448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.353 [2024-07-22 10:51:09.267690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.267699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:65456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.353 [2024-07-22 10:51:09.267705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.267714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:65464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.353 [2024-07-22 10:51:09.267721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.267745] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:10.353 [2024-07-22 10:51:09.267753] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65472 len:8 PRP1 0x0 PRP2 0x0 00:35:10.353 [2024-07-22 10:51:09.267761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.267770] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:10.353 [2024-07-22 10:51:09.267777] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:10.353 [2024-07-22 10:51:09.267785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65480 len:8 PRP1 0x0 PRP2 0x0 00:35:10.353 [2024-07-22 10:51:09.267793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.267801] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:10.353 [2024-07-22 10:51:09.267806] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:10.353 [2024-07-22 10:51:09.267812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65488 len:8 PRP1 0x0 PRP2 0x0 00:35:10.353 [2024-07-22 10:51:09.267819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.267826] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:10.353 [2024-07-22 10:51:09.267832] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:10.353 [2024-07-22 10:51:09.267838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65496 len:8 PRP1 0x0 PRP2 0x0 00:35:10.353 [2024-07-22 10:51:09.267845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.267853] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:10.353 [2024-07-22 10:51:09.267858] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:10.353 [2024-07-22 10:51:09.267864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65504 len:8 PRP1 0x0 PRP2 0x0 00:35:10.353 [2024-07-22 10:51:09.267871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.267879] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:10.353 [2024-07-22 10:51:09.267885] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:10.353 [2024-07-22 10:51:09.267891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65512 len:8 PRP1 0x0 PRP2 0x0 00:35:10.353 [2024-07-22 10:51:09.267898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.267907] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:10.353 [2024-07-22 10:51:09.267912] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:10.353 [2024-07-22 10:51:09.267918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:65520 len:8 PRP1 0x0 PRP2 0x0 00:35:10.353 [2024-07-22 10:51:09.267925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.267932] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:10.353 [2024-07-22 10:51:09.267939] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:10.353 [2024-07-22 10:51:09.267945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65528 len:8 PRP1 0x0 PRP2 0x0 00:35:10.353 [2024-07-22 10:51:09.267952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.267960] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:10.353 [2024-07-22 10:51:09.267965] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:10.353 [2024-07-22 10:51:09.267970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65536 len:8 PRP1 0x0 PRP2 0x0 00:35:10.353 [2024-07-22 10:51:09.267977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.267986] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:10.353 [2024-07-22 10:51:09.267991] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:10.353 [2024-07-22 10:51:09.267998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65544 len:8 PRP1 0x0 PRP2 0x0 00:35:10.353 [2024-07-22 10:51:09.268004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.268012] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:10.353 [2024-07-22 10:51:09.268017] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:10.353 [2024-07-22 10:51:09.268023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65080 len:8 PRP1 0x0 PRP2 0x0 00:35:10.353 [2024-07-22 10:51:09.268029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.268063] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1879990 was disconnected and freed. reset controller. 
00:35:10.353 [2024-07-22 10:51:09.268073] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:35:10.353 [2024-07-22 10:51:09.268092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:10.353 [2024-07-22 10:51:09.268101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.268109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:10.353 [2024-07-22 10:51:09.268116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.268124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:10.353 [2024-07-22 10:51:09.268130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.268138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:10.353 [2024-07-22 10:51:09.268145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.353 [2024-07-22 10:51:09.268153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:10.353 [2024-07-22 10:51:09.268185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x183d930 (9): Bad file descriptor 00:35:10.353 [2024-07-22 10:51:09.271729] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:10.354 [2024-07-22 10:51:09.402530] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
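The long run of READ/WRITE completions above ending in ABORTED - SQ DELETION (00/08) is the queued I/O being flushed when the active queue pair is torn down for a path switch; each switch finishes with the "Resetting controller successful" notice from _bdev_nvme_reset_ctrlr_complete. The pass/fail check for this phase, visible a little further down in the trace (host/failover.sh@65-67), simply counts those notices in try.txt. A minimal sketch of that check, with the file path taken from the trace and the failure handling assumed rather than copied from the script:

    try_txt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
    count=$(grep -c 'Resetting controller successful' "$try_txt")
    if (( count != 3 )); then
        # assumed error handling; the real script only shows the (( count != 3 )) test
        echo "expected 3 successful controller resets, got $count" >&2
        exit 1
    fi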
00:35:10.354 00:35:10.354 Latency(us) 00:35:10.354 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:10.354 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:10.354 Verification LBA range: start 0x0 length 0x4000 00:35:10.354 NVMe0n1 : 15.01 11171.47 43.64 788.58 0.00 10674.58 505.17 15073.28 00:35:10.354 =================================================================================================================== 00:35:10.354 Total : 11171.47 43.64 788.58 0.00 10674.58 505.17 15073.28 00:35:10.354 Received shutdown signal, test time was about 15.000000 seconds 00:35:10.354 00:35:10.354 Latency(us) 00:35:10.354 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:10.354 =================================================================================================================== 00:35:10.354 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:10.354 10:51:15 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:35:10.354 10:51:15 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:35:10.354 10:51:15 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:35:10.354 10:51:15 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2189549 00:35:10.354 10:51:15 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2189549 /var/tmp/bdevperf.sock 00:35:10.354 10:51:15 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:35:10.354 10:51:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2189549 ']' 00:35:10.354 10:51:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:10.354 10:51:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:10.354 10:51:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:10.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:35:10.354 10:51:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:10.354 10:51:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:35:10.354 10:51:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:10.354 10:51:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:35:10.354 10:51:15 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:35:10.354 [2024-07-22 10:51:16.018944] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:35:10.613 10:51:16 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:35:10.613 [2024-07-22 10:51:16.191322] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:35:10.613 10:51:16 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:35:10.871 NVMe0n1 00:35:10.871 10:51:16 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:35:11.130 00:35:11.130 10:51:16 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:35:11.388 00:35:11.388 10:51:16 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:35:11.388 10:51:16 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:35:11.647 10:51:17 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:35:11.647 10:51:17 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:35:14.939 10:51:20 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:35:14.939 10:51:20 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:35:14.939 10:51:20 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:35:14.939 10:51:20 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2190549 00:35:14.939 10:51:20 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 2190549 00:35:15.882 0 00:35:15.882 10:51:21 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:35:15.882 [2024-07-22 10:51:15.701864] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
00:35:15.882 [2024-07-22 10:51:15.701925] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2189549 ] 00:35:15.882 EAL: No free 2048 kB hugepages reported on node 1 00:35:15.882 [2024-07-22 10:51:15.767294] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:15.882 [2024-07-22 10:51:15.796912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:15.882 [2024-07-22 10:51:17.269512] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:35:15.882 [2024-07-22 10:51:17.269558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:15.882 [2024-07-22 10:51:17.269570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:15.882 [2024-07-22 10:51:17.269580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:15.882 [2024-07-22 10:51:17.269588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:15.882 [2024-07-22 10:51:17.269596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:15.882 [2024-07-22 10:51:17.269603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:15.882 [2024-07-22 10:51:17.269610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:15.882 [2024-07-22 10:51:17.269617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:15.882 [2024-07-22 10:51:17.269624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:15.882 [2024-07-22 10:51:17.269652] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:15.882 [2024-07-22 10:51:17.269666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1160930 (9): Bad file descriptor 00:35:15.882 [2024-07-22 10:51:17.280269] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:35:15.882 Running I/O for 1 seconds... 
00:35:15.882 00:35:15.882 Latency(us) 00:35:15.882 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:15.882 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:15.882 Verification LBA range: start 0x0 length 0x4000 00:35:15.882 NVMe0n1 : 1.00 11246.36 43.93 0.00 0.00 11317.37 1249.28 9448.11 00:35:15.882 =================================================================================================================== 00:35:15.882 Total : 11246.36 43.93 0.00 0.00 11317.37 1249.28 9448.11 00:35:16.143 10:51:21 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:35:16.143 10:51:21 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:35:16.143 10:51:21 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:35:16.404 10:51:21 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:35:16.404 10:51:21 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:35:16.404 10:51:22 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:35:16.664 10:51:22 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:35:19.962 10:51:25 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:35:19.962 10:51:25 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:35:19.962 10:51:25 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 2189549 00:35:19.962 10:51:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2189549 ']' 00:35:19.962 10:51:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2189549 00:35:19.962 10:51:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:35:19.962 10:51:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:19.962 10:51:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2189549 00:35:19.962 10:51:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:19.962 10:51:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:19.962 10:51:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2189549' 00:35:19.962 killing process with pid 2189549 00:35:19.962 10:51:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2189549 00:35:19.962 10:51:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2189549 00:35:19.962 10:51:25 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:35:19.962 10:51:25 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:20.223 10:51:25 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:35:20.223 
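For reference, the 1-second verification round summarized in the table above is driven entirely over bdevperf's own RPC socket rather than a config file. A condensed sketch of the sequence the trace performs (binary path, socket, port and NQN copied from the trace; the waitforlisten step and error handling are omitted here):

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # start bdevperf idle (-z) so bdevs can be added over its RPC socket
    "$spdk/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &

    # register the NVMe-oF path inside the bdevperf process as controller NVMe0
    "$spdk/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

    # kick off the timed run; the Latency(us) table above is its output
    "$spdk/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests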
10:51:25 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:35:20.223 10:51:25 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:35:20.223 10:51:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:20.223 10:51:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:35:20.223 10:51:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:20.223 10:51:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:35:20.223 10:51:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:20.223 10:51:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:20.223 rmmod nvme_tcp 00:35:20.223 rmmod nvme_fabrics 00:35:20.223 rmmod nvme_keyring 00:35:20.223 10:51:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:20.223 10:51:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:35:20.223 10:51:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:35:20.223 10:51:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 2185597 ']' 00:35:20.223 10:51:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 2185597 00:35:20.223 10:51:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2185597 ']' 00:35:20.223 10:51:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2185597 00:35:20.223 10:51:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:35:20.223 10:51:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:20.223 10:51:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2185597 00:35:20.223 10:51:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:35:20.223 10:51:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:35:20.223 10:51:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2185597' 00:35:20.223 killing process with pid 2185597 00:35:20.223 10:51:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2185597 00:35:20.223 10:51:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2185597 00:35:20.499 10:51:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:20.499 10:51:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:20.499 10:51:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:20.499 10:51:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:20.499 10:51:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:20.499 10:51:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:20.499 10:51:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:20.499 10:51:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:22.413 10:51:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:22.413 00:35:22.413 real 0m38.552s 00:35:22.413 user 1m55.683s 00:35:22.413 sys 0m8.460s 00:35:22.413 10:51:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:22.413 10:51:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
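The path handling exercised by this failover run boils down to the rpc.py calls below, condensed from the trace: the target exposes the same subsystem on two extra ports, the initiator registers all three ports under one controller name, and detaching the active path is what produces the "Start failover ..." / "Resetting controller successful" pair recorded in try.txt. This is a sketch of the traced sequence, not the full failover.sh logic:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc="$spdk/scripts/rpc.py"

    # target side: additional listeners for the existing subsystem
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

    # initiator side: three paths registered under the same controller name
    for port in 4420 4421 4422; do
        $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
            -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done

    # removing the active path forces bdev_nvme to fail over to a remaining one
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1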
00:35:22.413 ************************************ 00:35:22.413 END TEST nvmf_failover 00:35:22.413 ************************************ 00:35:22.674 10:51:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:35:22.674 10:51:28 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:35:22.674 10:51:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:35:22.674 10:51:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:22.674 10:51:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:22.674 ************************************ 00:35:22.674 START TEST nvmf_host_discovery 00:35:22.674 ************************************ 00:35:22.674 10:51:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:35:22.674 * Looking for test storage... 00:35:22.674 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:22.674 10:51:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:22.674 10:51:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:35:22.674 10:51:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:22.674 10:51:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:22.674 10:51:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:22.674 10:51:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:22.674 10:51:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:22.674 10:51:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:22.674 10:51:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:22.674 10:51:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:22.674 10:51:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:22.674 10:51:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:22.674 10:51:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:22.674 10:51:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:22.674 10:51:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:22.674 10:51:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:22.674 10:51:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:22.674 10:51:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:22.674 10:51:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:22.674 10:51:28 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:22.674 10:51:28 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:22.674 10:51:28 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:22.674 10:51:28 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:22.674 10:51:28 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:22.674 10:51:28 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:22.674 10:51:28 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:35:22.675 10:51:28 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:22.675 10:51:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:35:22.675 10:51:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:22.675 10:51:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:22.675 10:51:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:22.675 10:51:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:22.675 10:51:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:22.675 10:51:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:22.675 10:51:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:22.675 10:51:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:22.675 10:51:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:35:22.675 10:51:28 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:35:22.675 10:51:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:35:22.675 10:51:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:35:22.675 10:51:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:35:22.675 10:51:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:35:22.675 10:51:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:35:22.675 10:51:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:22.675 10:51:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:22.675 10:51:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:22.675 10:51:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:22.675 10:51:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:22.675 10:51:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:22.675 10:51:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:22.675 10:51:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:22.675 10:51:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:22.675 10:51:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:22.675 10:51:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:35:22.675 10:51:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:30.825 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:30.825 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:35:30.825 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:30.825 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:30.825 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:30.825 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:30.825 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:30.825 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:35:30.825 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:30.825 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:35:30.825 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:35:30.825 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:35:30.825 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:35:30.825 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:35:30.825 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:35:30.825 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:30.825 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:30.825 10:51:36 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:30.825 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:30.825 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:30.825 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:30.825 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:30.825 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:30.825 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:30.825 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:30.825 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:30.825 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:30.825 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:30.825 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:30.825 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:30.825 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:30.825 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:30.825 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:30.825 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:30.825 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:30.825 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:30.826 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:30.826 10:51:36 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:30.826 Found net devices under 0000:31:00.0: cvl_0_0 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:30.826 Found net devices under 0000:31:00.1: cvl_0_1 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:30.826 10:51:36 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:30.826 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:30.826 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.586 ms 00:35:30.826 00:35:30.826 --- 10.0.0.2 ping statistics --- 00:35:30.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:30.826 rtt min/avg/max/mdev = 0.586/0.586/0.586/0.000 ms 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:30.826 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:30.826 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms 00:35:30.826 00:35:30.826 --- 10.0.0.1 ping statistics --- 00:35:30.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:30.826 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=2196235 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 
2196235 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 2196235 ']' 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:30.826 10:51:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:30.827 10:51:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:30.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:30.827 10:51:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:30.827 10:51:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:31.087 [2024-07-22 10:51:36.552320] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:35:31.087 [2024-07-22 10:51:36.552386] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:31.087 EAL: No free 2048 kB hugepages reported on node 1 00:35:31.087 [2024-07-22 10:51:36.646783] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:31.087 [2024-07-22 10:51:36.693635] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:31.087 [2024-07-22 10:51:36.693686] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:31.087 [2024-07-22 10:51:36.693694] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:31.087 [2024-07-22 10:51:36.693701] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:31.087 [2024-07-22 10:51:36.693707] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
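The trace above is nvmftestinit building the NVMe/TCP test topology: the two ice ports (0000:31:00.0 and 0000:31:00.1) show up as cvl_0_0 and cvl_0_1, cvl_0_0 is moved into a fresh network namespace (cvl_0_0_ns_spdk) and addressed as the target side at 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1/24, TCP port 4420 is opened in iptables, and both directions are verified with ping. A minimal sketch of that setup, using the interface names and addresses taken from the trace (not the verbatim test script):

  # target-side port lives in its own namespace; initiator side stays in the root namespace
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow the NVMe/TCP I/O port
  ping -c 1 10.0.0.2                                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator

The target application itself is then launched with that namespace prefix (the NVMF_TARGET_NS_CMD wrapper visible above), so nvmf_tgt listens on 10.0.0.2 inside cvl_0_0_ns_spdk.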
00:35:31.087 [2024-07-22 10:51:36.693738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:31.656 10:51:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:31.656 10:51:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:35:31.656 10:51:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:31.656 10:51:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:31.656 10:51:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:31.916 10:51:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:31.916 10:51:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:31.916 10:51:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:31.916 10:51:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:31.916 [2024-07-22 10:51:37.395084] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:31.916 10:51:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:31.916 10:51:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:35:31.916 10:51:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:31.916 10:51:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:31.916 [2024-07-22 10:51:37.407302] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:35:31.916 10:51:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:31.916 10:51:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:35:31.916 10:51:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:31.916 10:51:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:31.916 null0 00:35:31.916 10:51:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:31.916 10:51:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:35:31.916 10:51:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:31.916 10:51:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:31.916 null1 00:35:31.916 10:51:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:31.916 10:51:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:35:31.916 10:51:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:31.916 10:51:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:31.916 10:51:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:31.916 10:51:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2196342 00:35:31.916 10:51:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2196342 /tmp/host.sock 00:35:31.916 10:51:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:35:31.916 10:51:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 2196342 ']' 00:35:31.916 10:51:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:35:31.916 10:51:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:31.916 10:51:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:35:31.916 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:35:31.916 10:51:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:31.916 10:51:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:31.916 [2024-07-22 10:51:37.502560] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:35:31.916 [2024-07-22 10:51:37.502622] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2196342 ] 00:35:31.916 EAL: No free 2048 kB hugepages reported on node 1 00:35:31.916 [2024-07-22 10:51:37.574337] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:31.916 [2024-07-22 10:51:37.613088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.857 10:51:38 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:32.857 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.145 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:35:33.145 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:35:33.145 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:33.145 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:33.145 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.145 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:33.145 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:33.145 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:33.145 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.145 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:35:33.145 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:33.145 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.145 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:33.145 [2024-07-22 10:51:38.634404] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:33.145 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.145 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:35:33.145 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:33.145 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:33.145 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.145 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:33.145 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:33.145 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:33.145 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.145 
10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:35:33.145 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:35:33.145 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:33.145 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:33.145 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.145 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:33.145 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:33.145 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:33.145 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.145 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:35:33.145 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:35:33.145 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:35:33.145 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:35:33.145 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:35:33.145 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:35:33.145 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:33.145 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:35:33.145 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:35:33.145 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:35:33.145 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:35:33.145 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.145 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:33.145 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.146 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:35:33.146 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:35:33.146 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:35:33.146 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:35:33.146 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:35:33.146 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.146 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:33.146 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.146 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:35:33.146 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:35:33.146 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:35:33.146 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:33.146 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:35:33.146 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:35:33.146 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:33.146 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:33.146 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.146 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:33.146 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:33.146 10:51:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:33.146 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.408 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:35:33.408 10:51:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:35:33.669 [2024-07-22 10:51:39.293326] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:33.669 [2024-07-22 10:51:39.293347] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:33.669 [2024-07-22 10:51:39.293362] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:33.928 [2024-07-22 10:51:39.382664] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:35:33.928 [2024-07-22 10:51:39.569332] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:35:33.928 [2024-07-22 10:51:39.569353] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:34.189 10:51:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:34.189 10:51:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:35:34.189 10:51:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:35:34.189 10:51:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:34.189 10:51:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:34.189 10:51:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.189 10:51:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:34.189 10:51:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:34.189 10:51:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:34.189 10:51:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.449 10:51:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:34.449 10:51:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:35:34.449 10:51:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:35:34.449 10:51:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:35:34.449 10:51:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:35:34.449 10:51:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:34.449 10:51:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:35:34.449 10:51:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:35:34.449 10:51:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:34.449 10:51:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.449 10:51:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:34.449 10:51:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:34.449 10:51:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:34.449 10:51:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:34.449 10:51:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.449 10:51:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:35:34.449 10:51:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:35:34.449 10:51:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:35:34.449 10:51:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:35:34.449 10:51:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:35:34.449 10:51:39 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:34.449 10:51:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:35:34.449 10:51:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:35:34.449 10:51:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:35:34.449 10:51:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:35:34.449 10:51:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:35:34.449 10:51:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.449 10:51:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:35:34.449 10:51:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:34.449 10:51:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.449 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:35:34.449 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:35:34.449 10:51:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:35:34.449 10:51:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:35:34.449 10:51:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:35:34.449 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:35:34.449 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:35:34.449 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:34.449 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:35:34.449 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:35:34.449 10:51:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:35:34.449 10:51:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:35:34.449 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.449 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:34.449 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.449 10:51:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:35:34.449 10:51:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:35:34.449 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:35:34.449 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:35:34.449 10:51:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:35:34.449 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.449 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:34.449 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.449 10:51:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:35:34.449 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:35:34.449 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:35:34.449 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:34.449 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:35:34.449 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:35:34.449 10:51:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:34.449 10:51:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:34.449 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.449 10:51:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:34.449 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:34.449 10:51:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:34.449 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.449 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:35:34.449 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:35:34.449 10:51:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:35:34.449 10:51:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:35:34.449 10:51:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:35:34.449 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:35:34.449 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:35:34.449 10:51:40 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:35:34.449 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:35:34.449 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:34.711 [2024-07-22 10:51:40.198526] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:35:34.711 [2024-07-22 10:51:40.199591] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:35:34.711 [2024-07-22 10:51:40.199616] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:35:34.711 [2024-07-22 10:51:40.328016] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:35:34.711 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:35:34.711 [2024-07-22 10:51:40.388665] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:35:34.711 [2024-07-22 10:51:40.388681] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:34.711 [2024-07-22 10:51:40.388686] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:35:36.099 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:36.099 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:35:36.099 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:35:36.099 10:51:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:35:36.099 10:51:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:35:36.099 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.099 10:51:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:35:36.099 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:36.099 10:51:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:35:36.099 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.099 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:35:36.099 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:35:36.099 10:51:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:35:36.099 10:51:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:35:36.099 10:51:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:35:36.099 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:35:36.099 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:35:36.099 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:36.099 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:35:36.099 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:35:36.099 10:51:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:35:36.099 10:51:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:35:36.099 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.099 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:36.099 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.099 10:51:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:35:36.099 10:51:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:35:36.099 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:35:36.099 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:35:36.099 10:51:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:36.099 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.099 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:36.099 [2024-07-22 10:51:41.478954] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:35:36.099 [2024-07-22 10:51:41.478978] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:36.099 [2024-07-22 10:51:41.481386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:36.099 [2024-07-22 10:51:41.481409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:36.099 [2024-07-22 10:51:41.481419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:36.099 [2024-07-22 10:51:41.481427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:36.100 [2024-07-22 10:51:41.481435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:36.100 [2024-07-22 10:51:41.481442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:36.100 [2024-07-22 10:51:41.481450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:36.100 [2024-07-22 10:51:41.481457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:36.100 [2024-07-22 10:51:41.481472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e74560 is same with the state(5) to be set 00:35:36.100 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.100 10:51:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:35:36.100 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:35:36.100 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:35:36.100 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:36.100 10:51:41 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:35:36.100 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:35:36.100 10:51:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:36.100 10:51:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:36.100 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.100 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:36.100 10:51:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:36.100 10:51:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:36.100 [2024-07-22 10:51:41.491403] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e74560 (9): Bad file descriptor 00:35:36.100 [2024-07-22 10:51:41.501438] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:36.100 [2024-07-22 10:51:41.501781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.100 [2024-07-22 10:51:41.501820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e74560 with addr=10.0.0.2, port=4420 00:35:36.100 [2024-07-22 10:51:41.501831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e74560 is same with the state(5) to be set 00:35:36.100 [2024-07-22 10:51:41.501850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e74560 (9): Bad file descriptor 00:35:36.100 [2024-07-22 10:51:41.501862] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:36.100 [2024-07-22 10:51:41.501870] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:36.100 [2024-07-22 10:51:41.501878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:36.100 [2024-07-22 10:51:41.501893] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
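With the transport and discovery listener in place on the target and a second nvmf_tgt acting as the host (started with -r /tmp/host.sock), discovery.sh drives everything through RPCs; the trace interleaves them with polling checks, so here is the same sequence condensed into plain rpc.py calls. This is a sketch for orientation only: rpc_cmd in the trace is the test framework's wrapper around SPDK's scripts/rpc.py, the $RPC path is illustrative, and the ordering is compressed from the surrounding log.

  RPC=./scripts/rpc.py                 # assumption: direct rpc.py calls instead of the rpc_cmd wrapper
  # target side (nvmf_tgt inside cvl_0_0_ns_spdk, default socket /var/tmp/spdk.sock)
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  $RPC bdev_null_create null0 1000 512
  $RPC bdev_null_create null1 1000 512
  # host side (second nvmf_tgt, socket /tmp/host.sock) starts the discovery service
  $RPC -s /tmp/host.sock log_set_flag bdev_nvme
  $RPC -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
  # back on the target: publish a subsystem and let the host attach via AER + discovery log page
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
  $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

Each target-side change shows up on the host as a new bdev (nvme0n1, then nvme0n2) or an extra path (trsvcid 4420, then 4421), which is what the notify_get_notifications and bdev_nvme_get_controllers checks in the trace are counting.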
00:35:36.100 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.100 [2024-07-22 10:51:41.511495] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:36.100 [2024-07-22 10:51:41.511891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.100 [2024-07-22 10:51:41.511904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e74560 with addr=10.0.0.2, port=4420 00:35:36.100 [2024-07-22 10:51:41.511912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e74560 is same with the state(5) to be set 00:35:36.100 [2024-07-22 10:51:41.511923] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e74560 (9): Bad file descriptor 00:35:36.100 [2024-07-22 10:51:41.511933] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:36.100 [2024-07-22 10:51:41.511940] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:36.100 [2024-07-22 10:51:41.511947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:36.100 [2024-07-22 10:51:41.511962] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:36.100 [2024-07-22 10:51:41.521551] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:36.100 [2024-07-22 10:51:41.521921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.100 [2024-07-22 10:51:41.521933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e74560 with addr=10.0.0.2, port=4420 00:35:36.100 [2024-07-22 10:51:41.521941] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e74560 is same with the state(5) to be set 00:35:36.100 [2024-07-22 10:51:41.521952] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e74560 (9): Bad file descriptor 00:35:36.100 [2024-07-22 10:51:41.521962] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:36.100 [2024-07-22 10:51:41.521968] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:36.100 [2024-07-22 10:51:41.521975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:36.100 [2024-07-22 10:51:41.521985] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.100 [2024-07-22 10:51:41.531603] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:36.100 [2024-07-22 10:51:41.531939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.100 [2024-07-22 10:51:41.531952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e74560 with addr=10.0.0.2, port=4420 00:35:36.100 [2024-07-22 10:51:41.531959] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e74560 is same with the state(5) to be set 00:35:36.100 [2024-07-22 10:51:41.531970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e74560 (9): Bad file descriptor 00:35:36.100 [2024-07-22 10:51:41.531980] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:36.100 [2024-07-22 10:51:41.531987] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:36.100 [2024-07-22 10:51:41.531993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:36.100 [2024-07-22 10:51:41.532004] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:36.100 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:36.100 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:35:36.100 10:51:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:35:36.100 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:35:36.100 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:35:36.100 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:36.100 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:35:36.100 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:35:36.100 [2024-07-22 10:51:41.541658] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:36.100 [2024-07-22 10:51:41.541984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.100 [2024-07-22 10:51:41.541997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e74560 with addr=10.0.0.2, port=4420 00:35:36.100 [2024-07-22 10:51:41.542004] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e74560 is same with the state(5) to be set 00:35:36.100 [2024-07-22 10:51:41.542016] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e74560 (9): Bad file descriptor 00:35:36.100 [2024-07-22 10:51:41.542029] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:36.100 [2024-07-22 10:51:41.542036] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:36.100 [2024-07-22 10:51:41.542044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:35:36.100 [2024-07-22 10:51:41.542055] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:36.100 10:51:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:36.100 10:51:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:36.100 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.100 10:51:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:36.100 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:36.100 10:51:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:36.100 [2024-07-22 10:51:41.551710] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:36.100 [2024-07-22 10:51:41.551879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.100 [2024-07-22 10:51:41.551894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e74560 with addr=10.0.0.2, port=4420 00:35:36.100 [2024-07-22 10:51:41.551902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e74560 is same with the state(5) to be set 00:35:36.100 [2024-07-22 10:51:41.551913] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e74560 (9): Bad file descriptor 00:35:36.100 [2024-07-22 10:51:41.551923] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:36.100 [2024-07-22 10:51:41.551929] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:36.100 [2024-07-22 10:51:41.551936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:36.100 [2024-07-22 10:51:41.551947] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:36.100 [2024-07-22 10:51:41.561765] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:36.100 [2024-07-22 10:51:41.562125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.100 [2024-07-22 10:51:41.562138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e74560 with addr=10.0.0.2, port=4420 00:35:36.100 [2024-07-22 10:51:41.562145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e74560 is same with the state(5) to be set 00:35:36.100 [2024-07-22 10:51:41.562156] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e74560 (9): Bad file descriptor 00:35:36.100 [2024-07-22 10:51:41.562166] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:36.100 [2024-07-22 10:51:41.562172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:36.100 [2024-07-22 10:51:41.562179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:36.100 [2024-07-22 10:51:41.562190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
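The conditions being polled rely on a few small helpers from host/discovery.sh whose expansions appear throughout this trace. A hedged reconstruction is sketched below; the rpc_cmd invocations, jq filters and the /tmp/host.sock socket path are lifted from the trace, while the exact function bodies are an approximation.

    # Approximate reconstruction of the discovery.sh helpers seen expanded above.
    HOST_SOCK=/tmp/host.sock   # socket path hard-coded here, as in the trace

    get_subsystem_names() {
        rpc_cmd -s $HOST_SOCK bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }

    get_bdev_list() {
        rpc_cmd -s $HOST_SOCK bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    get_notification_count() {
        # notify_id holds the last consumed id (2 at this point in the trace) and
        # advances by the number of returned notifications, so each poll only
        # counts new events (0 here, then 2 -> notify_id=4 further down).
        notification_count=$(rpc_cmd -s $HOST_SOCK notify_get_notifications -i $notify_id | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }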
00:35:36.100 [2024-07-22 10:51:41.567497] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:35:36.100 [2024-07-22 10:51:41.567515] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:35:36.101 
10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:36.101 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.362 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:35:36.362 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:35:36.362 10:51:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:35:36.362 10:51:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:35:36.362 10:51:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:35:36.362 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:35:36.362 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:35:36.362 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:36.362 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:35:36.362 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:35:36.362 10:51:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:35:36.362 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.362 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:36.362 10:51:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:35:36.362 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.362 10:51:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:35:36.362 10:51:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:35:36.362 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:35:36.362 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:35:36.362 10:51:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:35:36.362 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.362 10:51:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:37.306 [2024-07-22 10:51:42.872460] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:37.306 [2024-07-22 10:51:42.872479] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:37.306 [2024-07-22 10:51:42.872492] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:37.306 [2024-07-22 10:51:42.960780] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:35:37.566 [2024-07-22 10:51:43.027561] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:35:37.566 [2024-07-22 10:51:43.027591] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:37.566 request: 00:35:37.566 { 00:35:37.566 "name": "nvme", 00:35:37.566 "trtype": "tcp", 00:35:37.566 "traddr": "10.0.0.2", 00:35:37.566 "adrfam": "ipv4", 00:35:37.566 "trsvcid": 
"8009", 00:35:37.566 "hostnqn": "nqn.2021-12.io.spdk:test", 00:35:37.566 "wait_for_attach": true, 00:35:37.566 "method": "bdev_nvme_start_discovery", 00:35:37.566 "req_id": 1 00:35:37.566 } 00:35:37.566 Got JSON-RPC error response 00:35:37.566 response: 00:35:37.566 { 00:35:37.566 "code": -17, 00:35:37.566 "message": "File exists" 00:35:37.566 } 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # 
type -t rpc_cmd 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:37.566 request: 00:35:37.566 { 00:35:37.566 "name": "nvme_second", 00:35:37.566 "trtype": "tcp", 00:35:37.566 "traddr": "10.0.0.2", 00:35:37.566 "adrfam": "ipv4", 00:35:37.566 "trsvcid": "8009", 00:35:37.566 "hostnqn": "nqn.2021-12.io.spdk:test", 00:35:37.566 "wait_for_attach": true, 00:35:37.566 "method": "bdev_nvme_start_discovery", 00:35:37.566 "req_id": 1 00:35:37.566 } 00:35:37.566 Got JSON-RPC error response 00:35:37.566 response: 00:35:37.566 { 00:35:37.566 "code": -17, 00:35:37.566 "message": "File exists" 00:35:37.566 } 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:37.566 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.826 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:35:37.826 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock 
bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:35:37.826 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:35:37.826 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:35:37.826 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:35:37.826 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:37.826 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:35:37.826 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:37.826 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:35:37.826 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.826 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:38.766 [2024-07-22 10:51:44.295501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.766 [2024-07-22 10:51:44.295530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e91020 with addr=10.0.0.2, port=8010 00:35:38.766 [2024-07-22 10:51:44.295542] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:35:38.766 [2024-07-22 10:51:44.295549] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:38.766 [2024-07-22 10:51:44.295556] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:35:39.705 [2024-07-22 10:51:45.297856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.705 [2024-07-22 10:51:45.297882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e91020 with addr=10.0.0.2, port=8010 00:35:39.705 [2024-07-22 10:51:45.297893] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:35:39.705 [2024-07-22 10:51:45.297901] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:39.705 [2024-07-22 10:51:45.297907] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:35:40.674 [2024-07-22 10:51:46.299860] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:35:40.674 request: 00:35:40.674 { 00:35:40.674 "name": "nvme_second", 00:35:40.674 "trtype": "tcp", 00:35:40.674 "traddr": "10.0.0.2", 00:35:40.674 "adrfam": "ipv4", 00:35:40.674 "trsvcid": "8010", 00:35:40.674 "hostnqn": "nqn.2021-12.io.spdk:test", 00:35:40.674 "wait_for_attach": false, 00:35:40.674 "attach_timeout_ms": 3000, 00:35:40.674 "method": "bdev_nvme_start_discovery", 00:35:40.674 "req_id": 1 00:35:40.674 } 00:35:40.674 Got JSON-RPC error response 00:35:40.674 response: 00:35:40.674 { 00:35:40.674 "code": -110, 00:35:40.674 "message": "Connection timed out" 00:35:40.674 } 00:35:40.674 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:35:40.674 10:51:46 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@651 -- # es=1 00:35:40.674 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:40.674 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:40.674 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:40.674 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:35:40.674 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:35:40.674 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:35:40.674 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.674 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:40.674 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:35:40.674 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:35:40.674 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.982 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:35:40.982 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:35:40.982 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2196342 00:35:40.982 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:35:40.982 10:51:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:40.982 10:51:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:35:40.982 10:51:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:40.982 10:51:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:35:40.982 10:51:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:40.982 10:51:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:40.982 rmmod nvme_tcp 00:35:40.982 rmmod nvme_fabrics 00:35:40.982 rmmod nvme_keyring 00:35:40.982 10:51:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:40.982 10:51:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:35:40.982 10:51:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:35:40.982 10:51:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 2196235 ']' 00:35:40.982 10:51:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 2196235 00:35:40.982 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 2196235 ']' 00:35:40.982 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 2196235 00:35:40.982 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:35:40.982 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:40.982 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2196235 00:35:40.982 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:35:40.982 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:35:40.982 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with 
pid 2196235' 00:35:40.982 killing process with pid 2196235 00:35:40.982 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 2196235 00:35:40.982 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 2196235 00:35:40.982 10:51:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:40.982 10:51:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:40.982 10:51:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:40.982 10:51:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:40.982 10:51:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:40.982 10:51:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:40.982 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:40.982 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:43.529 00:35:43.529 real 0m20.472s 00:35:43.529 user 0m23.025s 00:35:43.529 sys 0m7.454s 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:43.529 ************************************ 00:35:43.529 END TEST nvmf_host_discovery 00:35:43.529 ************************************ 00:35:43.529 10:51:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:35:43.529 10:51:48 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:35:43.529 10:51:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:35:43.529 10:51:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:43.529 10:51:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:43.529 ************************************ 00:35:43.529 START TEST nvmf_host_multipath_status 00:35:43.529 ************************************ 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:35:43.529 * Looking for test storage... 
00:35:43.529 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:43.529 10:51:48 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:35:43.529 10:51:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:51.670 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:51.670 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
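After bucketing PCI IDs into the e810/x722/mlx arrays and matching the two e810 functions 0000:31:00.0 and 0000:31:00.1, nvmf/common.sh maps each matched PCI function to its kernel net device (reported just below as cvl_0_0 and cvl_0_1) through sysfs. A minimal sketch of that mapping, assuming the standard /sys/bus/pci layout; the array expansions are taken from the trace.

    # Minimal sketch of the PCI -> net device mapping traced below; pci_devs holds
    # the two e810 functions matched above.
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface name
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done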
00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:51.670 Found net devices under 0000:31:00.0: cvl_0_0 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:51.670 Found net devices under 0000:31:00.1: cvl_0_1 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:51.670 10:51:56 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:51.670 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:51.670 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.605 ms 00:35:51.670 00:35:51.670 --- 10.0.0.2 ping statistics --- 00:35:51.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:51.670 rtt min/avg/max/mdev = 0.605/0.605/0.605/0.000 ms 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:51.670 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:51.670 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:35:51.670 00:35:51.670 --- 10.0.0.1 ping statistics --- 00:35:51.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:51.670 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:51.670 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:51.671 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:51.671 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=2202793 00:35:51.671 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 2202793 00:35:51.671 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:35:51.671 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 2202793 ']' 00:35:51.671 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:51.671 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:51.671 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:51.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:51.671 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:51.671 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:51.671 [2024-07-22 10:51:56.812727] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
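Before the target starts, nvmf_tcp_init (nvmf/common.sh) builds a loopback topology out of the two E810 ports: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and given 10.0.0.2/24 (target side), cvl_0_1 stays in the default namespace with 10.0.0.1/24 (initiator side), TCP port 4420 is opened in iptables, and both directions are verified with a single ping, as seen above. A minimal sketch of that topology, with eth_tgt/eth_ini and tgt_ns as placeholder names standing in for the cvl_0_* interfaces and namespace:

  ip netns add tgt_ns                              # target-side namespace
  ip link set eth_tgt netns tgt_ns                 # move the target port into it
  ip addr add 10.0.0.1/24 dev eth_ini              # initiator address, default namespace
  ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev eth_tgt
  ip link set eth_ini up
  ip netns exec tgt_ns ip link set eth_tgt up
  ip netns exec tgt_ns ip link set lo up
  iptables -I INPUT 1 -i eth_ini -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                               # default namespace -> target
  ip netns exec tgt_ns ping -c 1 10.0.0.1          # target namespace -> initiator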
00:35:51.671 [2024-07-22 10:51:56.812777] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:51.671 EAL: No free 2048 kB hugepages reported on node 1 00:35:51.671 [2024-07-22 10:51:56.882621] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:51.671 [2024-07-22 10:51:56.913679] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:51.671 [2024-07-22 10:51:56.913713] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:51.671 [2024-07-22 10:51:56.913721] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:51.671 [2024-07-22 10:51:56.913728] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:51.671 [2024-07-22 10:51:56.913734] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:51.671 [2024-07-22 10:51:56.913873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:51.671 [2024-07-22 10:51:56.913874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:51.931 10:51:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:51.931 10:51:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:35:51.931 10:51:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:51.931 10:51:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:51.931 10:51:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:51.931 10:51:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:51.931 10:51:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2202793 00:35:51.931 10:51:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:52.191 [2024-07-22 10:51:57.742873] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:52.191 10:51:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:35:52.451 Malloc0 00:35:52.451 10:51:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:35:52.452 10:51:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:52.711 10:51:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:52.971 [2024-07-22 10:51:58.419910] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:52.971 10:51:58 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:35:52.971 [2024-07-22 10:51:58.580324] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:35:52.971 10:51:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2203147 00:35:52.971 10:51:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:52.971 10:51:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:35:52.971 10:51:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2203147 /var/tmp/bdevperf.sock 00:35:52.971 10:51:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 2203147 ']' 00:35:52.971 10:51:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:52.971 10:51:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:52.971 10:51:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:52.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:52.971 10:51:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:52.971 10:51:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:53.231 10:51:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:53.231 10:51:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:35:53.231 10:51:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:35:53.492 10:51:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:35:53.753 Nvme0n1 00:35:53.753 10:51:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:35:54.324 Nvme0n1 00:35:54.324 10:51:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:35:54.324 10:51:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:35:56.237 10:52:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:35:56.237 10:52:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:35:56.497 10:52:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:56.756 10:52:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:35:57.695 10:52:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:35:57.696 10:52:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:57.696 10:52:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:57.696 10:52:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:57.955 10:52:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:57.955 10:52:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:57.955 10:52:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:57.955 10:52:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:57.955 10:52:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:57.955 10:52:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:57.955 10:52:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:57.955 10:52:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:58.214 10:52:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:58.214 10:52:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:58.214 10:52:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:58.214 10:52:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:58.475 10:52:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:58.475 10:52:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:58.475 10:52:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:58.475 10:52:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:58.475 10:52:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:58.475 10:52:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:58.475 10:52:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:58.475 10:52:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:58.735 10:52:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:58.735 10:52:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:35:58.735 10:52:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:58.735 10:52:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:58.995 10:52:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:35:59.932 10:52:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:35:59.932 10:52:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:59.932 10:52:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:59.932 10:52:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:36:00.192 10:52:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:00.192 10:52:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:36:00.192 10:52:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:00.192 10:52:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:36:00.452 10:52:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:00.452 10:52:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:36:00.452 10:52:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:00.452 10:52:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:00.452 10:52:06 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:00.452 10:52:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:36:00.452 10:52:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:00.452 10:52:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:36:00.712 10:52:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:00.712 10:52:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:36:00.712 10:52:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:00.712 10:52:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:36:00.712 10:52:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:00.712 10:52:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:36:00.712 10:52:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:00.712 10:52:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:36:00.971 10:52:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:00.971 10:52:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:36:00.971 10:52:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:36:01.231 10:52:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:36:01.231 10:52:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:36:02.614 10:52:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:36:02.614 10:52:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:36:02.614 10:52:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:02.614 10:52:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:36:02.614 10:52:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:02.614 10:52:08 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:36:02.614 10:52:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:02.614 10:52:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:36:02.614 10:52:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:02.614 10:52:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:36:02.614 10:52:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:02.614 10:52:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:02.876 10:52:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:02.876 10:52:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:36:02.876 10:52:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:36:02.876 10:52:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:03.135 10:52:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:03.135 10:52:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:36:03.135 10:52:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:03.135 10:52:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:36:03.135 10:52:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:03.135 10:52:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:36:03.135 10:52:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:03.135 10:52:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:36:03.394 10:52:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:03.394 10:52:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:36:03.394 10:52:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:36:03.652 10:52:09 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:36:03.652 10:52:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:36:04.588 10:52:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:36:04.588 10:52:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:36:04.588 10:52:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:04.588 10:52:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:36:04.847 10:52:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:04.847 10:52:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:36:04.847 10:52:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:04.847 10:52:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:36:05.107 10:52:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:05.107 10:52:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:36:05.107 10:52:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:05.107 10:52:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:05.107 10:52:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:05.107 10:52:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:36:05.107 10:52:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:36:05.107 10:52:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:05.379 10:52:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:05.379 10:52:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:36:05.379 10:52:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:05.379 10:52:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:36:05.649 10:52:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:36:05.649 10:52:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:36:05.649 10:52:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:05.649 10:52:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:36:05.649 10:52:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:05.649 10:52:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:36:05.649 10:52:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:36:05.907 10:52:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:36:06.165 10:52:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:36:07.102 10:52:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:36:07.102 10:52:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:36:07.102 10:52:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:07.102 10:52:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:36:07.361 10:52:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:07.361 10:52:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:36:07.361 10:52:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:07.361 10:52:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:36:07.361 10:52:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:07.361 10:52:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:36:07.361 10:52:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:07.361 10:52:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:07.621 10:52:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:07.621 10:52:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:36:07.621 10:52:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:07.621 10:52:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:36:07.621 10:52:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:07.621 10:52:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:36:07.621 10:52:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:07.621 10:52:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:36:07.881 10:52:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:07.881 10:52:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:36:07.881 10:52:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:07.881 10:52:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:36:08.139 10:52:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:08.139 10:52:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:36:08.139 10:52:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:36:08.139 10:52:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:36:08.398 10:52:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:36:09.332 10:52:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:36:09.332 10:52:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:36:09.332 10:52:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:09.332 10:52:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:36:09.592 10:52:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:09.592 10:52:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:36:09.592 10:52:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:09.592 10:52:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:36:09.851 10:52:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:09.851 10:52:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:36:09.851 10:52:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:09.851 10:52:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:09.851 10:52:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:09.851 10:52:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:36:09.851 10:52:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:09.851 10:52:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:36:10.110 10:52:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:10.110 10:52:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:36:10.110 10:52:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:10.111 10:52:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:36:10.371 10:52:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:10.371 10:52:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:36:10.371 10:52:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:10.371 10:52:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:36:10.371 10:52:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:10.371 10:52:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:36:10.631 10:52:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:36:10.631 10:52:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:36:10.891 10:52:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:36:10.891 10:52:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:36:12.270 10:52:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:36:12.270 10:52:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:36:12.270 10:52:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:12.270 10:52:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:36:12.270 10:52:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:12.270 10:52:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:36:12.271 10:52:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:12.271 10:52:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:36:12.271 10:52:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:12.271 10:52:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:36:12.271 10:52:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:12.271 10:52:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:12.530 10:52:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:12.530 10:52:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:36:12.530 10:52:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:12.530 10:52:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:36:12.530 10:52:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:12.530 10:52:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:36:12.530 10:52:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:12.530 10:52:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:36:12.789 10:52:18 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:12.789 10:52:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:36:12.789 10:52:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:12.789 10:52:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:36:13.047 10:52:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:13.047 10:52:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:36:13.047 10:52:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:36:13.047 10:52:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:36:13.306 10:52:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:36:14.244 10:52:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:36:14.244 10:52:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:36:14.244 10:52:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:14.244 10:52:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:36:14.506 10:52:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:14.506 10:52:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:36:14.506 10:52:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:14.506 10:52:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:36:14.765 10:52:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:14.765 10:52:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:36:14.765 10:52:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:14.765 10:52:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:14.765 10:52:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:14.765 10:52:20 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:36:14.765 10:52:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:14.765 10:52:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:36:15.116 10:52:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:15.116 10:52:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:36:15.116 10:52:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:15.116 10:52:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:36:15.116 10:52:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:15.116 10:52:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:36:15.116 10:52:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:15.116 10:52:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:36:15.394 10:52:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:15.394 10:52:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:36:15.394 10:52:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:36:15.394 10:52:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:36:15.654 10:52:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:36:16.594 10:52:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:36:16.594 10:52:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:36:16.594 10:52:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:16.594 10:52:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:36:16.853 10:52:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:16.853 10:52:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:36:16.854 10:52:22 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:16.854 10:52:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:36:17.113 10:52:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:17.113 10:52:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:36:17.113 10:52:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:17.113 10:52:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:17.113 10:52:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:17.113 10:52:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:36:17.113 10:52:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:17.113 10:52:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:36:17.389 10:52:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:17.389 10:52:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:36:17.389 10:52:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:17.389 10:52:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:36:17.389 10:52:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:17.389 10:52:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:36:17.651 10:52:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:17.651 10:52:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:36:17.651 10:52:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:17.651 10:52:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:36:17.651 10:52:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:36:17.910 10:52:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:36:18.171 10:52:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:36:19.109 10:52:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:36:19.109 10:52:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:36:19.109 10:52:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:19.109 10:52:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:36:19.109 10:52:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:19.109 10:52:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:36:19.109 10:52:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:19.109 10:52:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:36:19.370 10:52:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:19.370 10:52:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:36:19.370 10:52:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:19.370 10:52:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:19.630 10:52:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:19.630 10:52:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:36:19.630 10:52:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:19.630 10:52:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:36:19.630 10:52:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:19.631 10:52:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:36:19.631 10:52:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:19.631 10:52:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:36:19.891 10:52:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:19.891 10:52:25 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:36:19.891 10:52:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:19.891 10:52:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:36:20.153 10:52:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:20.153 10:52:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2203147 00:36:20.153 10:52:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 2203147 ']' 00:36:20.153 10:52:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 2203147 00:36:20.153 10:52:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:36:20.153 10:52:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:20.153 10:52:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2203147 00:36:20.153 10:52:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:36:20.153 10:52:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:36:20.153 10:52:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2203147' 00:36:20.153 killing process with pid 2203147 00:36:20.153 10:52:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 2203147 00:36:20.153 10:52:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 2203147 00:36:20.153 Connection closed with partial response: 00:36:20.153 00:36:20.153 00:36:20.153 10:52:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2203147 00:36:20.153 10:52:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:36:20.153 [2024-07-22 10:51:58.659633] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:36:20.153 [2024-07-22 10:51:58.659689] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2203147 ] 00:36:20.153 EAL: No free 2048 kB hugepages reported on node 1 00:36:20.153 [2024-07-22 10:51:58.714381] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:20.153 [2024-07-22 10:51:58.742557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:36:20.153 Running I/O for 90 seconds... 
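The port_status/check_status expansions traced above all follow one pattern: query the bdevperf RPC socket for the current I/O paths and compare a single jq-extracted field against an expected literal. A condensed sketch of that helper, reconstructed from the expanded commands shown in the trace (the actual helper in multipath_status.sh may differ in detail), is:

  port_status() {   # usage: port_status <trsvcid> <field> <expected>, e.g. port_status 4421 accessible false
    local port=$1 field=$2 expected=$3 actual
    # same RPC socket and rpc.py path as the trace above
    actual=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
      | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
    [[ "$actual" == "$expected" ]]
  }

The repeated [[ true == \t\r\u\e ]] lines in the trace are simply bash's xtrace rendering of that final comparison.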
00:36:20.153 [2024-07-22 10:52:11.452868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:42984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.153 [2024-07-22 10:52:11.452902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:36:20.153 [2024-07-22 10:52:11.453041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:42992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.153 [2024-07-22 10:52:11.453050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:36:20.153 [2024-07-22 10:52:11.453061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:43000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.153 [2024-07-22 10:52:11.453067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:36:20.153 [2024-07-22 10:52:11.453077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:43008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.153 [2024-07-22 10:52:11.453083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:36:20.153 [2024-07-22 10:52:11.453094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:43016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.153 [2024-07-22 10:52:11.453098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:36:20.153 [2024-07-22 10:52:11.453109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:43024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.153 [2024-07-22 10:52:11.453114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:36:20.153 [2024-07-22 10:52:11.453124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:43032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.153 [2024-07-22 10:52:11.453129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:36:20.153 [2024-07-22 10:52:11.453140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:43040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.153 [2024-07-22 10:52:11.453145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:36:20.153 [2024-07-22 10:52:11.453155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:43048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.153 [2024-07-22 10:52:11.453161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:20.153 [2024-07-22 10:52:11.453171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:43056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.153 [2024-07-22 10:52:11.453175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:36:20.153 [2024-07-22 10:52:11.453186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:43064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.153 [2024-07-22 10:52:11.453195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:36:20.153 [2024-07-22 10:52:11.453205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:43072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.153 [2024-07-22 10:52:11.453210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:36:20.153 [2024-07-22 10:52:11.453221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:43080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.153 [2024-07-22 10:52:11.453226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:36:20.154 [2024-07-22 10:52:11.453237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:43088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.154 [2024-07-22 10:52:11.453242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:20.154 [2024-07-22 10:52:11.453252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:43096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.154 [2024-07-22 10:52:11.453257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:20.154 [2024-07-22 10:52:11.453268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:43104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.154 [2024-07-22 10:52:11.453273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:36:20.154 [2024-07-22 10:52:11.453283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:43112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.154 [2024-07-22 10:52:11.453289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:36:20.154 [2024-07-22 10:52:11.453299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:43120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.154 [2024-07-22 10:52:11.453304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:36:20.154 [2024-07-22 10:52:11.453315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:43128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.154 [2024-07-22 10:52:11.453320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:36:20.154 [2024-07-22 10:52:11.453330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:43136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.154 [2024-07-22 10:52:11.453335] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:36:20.154 [2024-07-22 10:52:11.453345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:43144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.154 [2024-07-22 10:52:11.453350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:36:20.154 [2024-07-22 10:52:11.453360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:43152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.154 [2024-07-22 10:52:11.453365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:36:20.154 [2024-07-22 10:52:11.453375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:43160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.154 [2024-07-22 10:52:11.453381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:36:20.154 [2024-07-22 10:52:11.453392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:43168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.154 [2024-07-22 10:52:11.453403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:36:20.154 [2024-07-22 10:52:11.453413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:43176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.154 [2024-07-22 10:52:11.453418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:36:20.154 [2024-07-22 10:52:11.453428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:43184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.154 [2024-07-22 10:52:11.453433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:36:20.154 [2024-07-22 10:52:11.453443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:43192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.154 [2024-07-22 10:52:11.453448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:36:20.154 [2024-07-22 10:52:11.453459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:43200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.154 [2024-07-22 10:52:11.453464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:36:20.154 [2024-07-22 10:52:11.453474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:43208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.154 [2024-07-22 10:52:11.453479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:20.154 [2024-07-22 10:52:11.453490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:43216 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:20.154 [2024-07-22 10:52:11.453495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:36:20.154 [2024-07-22 10:52:11.453505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:43224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.154 [2024-07-22 10:52:11.453509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:36:20.154 [2024-07-22 10:52:11.453520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:43232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.154 [2024-07-22 10:52:11.453524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:36:20.154 [2024-07-22 10:52:11.453535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:43240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.154 [2024-07-22 10:52:11.453539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:36:20.154 [2024-07-22 10:52:11.453550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:43248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.154 [2024-07-22 10:52:11.453555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:36:20.154 [2024-07-22 10:52:11.453566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:43256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.154 [2024-07-22 10:52:11.453571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:20.154 [2024-07-22 10:52:11.453583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:43264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.154 [2024-07-22 10:52:11.453588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:36:20.154 [2024-07-22 10:52:11.453598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:43272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.154 [2024-07-22 10:52:11.453603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:36:20.154 [2024-07-22 10:52:11.453613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:43280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.154 [2024-07-22 10:52:11.453618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:36:20.154 [2024-07-22 10:52:11.453628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:43288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.154 [2024-07-22 10:52:11.453633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:36:20.154 [2024-07-22 10:52:11.453643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:72 nsid:1 lba:43296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.154 [2024-07-22 10:52:11.453648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:36:20.154 [2024-07-22 10:52:11.453659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:43304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.154 [2024-07-22 10:52:11.453664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:20.154 [2024-07-22 10:52:11.453674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:43312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.154 [2024-07-22 10:52:11.453679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:36:20.154 [2024-07-22 10:52:11.453689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:43320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.154 [2024-07-22 10:52:11.453694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:36:20.154 [2024-07-22 10:52:11.453704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:43328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.154 [2024-07-22 10:52:11.453709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:20.154 [2024-07-22 10:52:11.453719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:43336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.154 [2024-07-22 10:52:11.453725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:20.154 [2024-07-22 10:52:11.453735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:43344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.154 [2024-07-22 10:52:11.453740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:20.154 [2024-07-22 10:52:11.453750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:43352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.154 [2024-07-22 10:52:11.453754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:20.154 [2024-07-22 10:52:11.453766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:43360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.154 [2024-07-22 10:52:11.453772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:36:20.154 [2024-07-22 10:52:11.453862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:43368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.154 [2024-07-22 10:52:11.453868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:20.154 [2024-07-22 10:52:11.453882] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:43376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.154 [2024-07-22 10:52:11.453887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:36:20.154 [2024-07-22 10:52:11.453900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:43384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.154 [2024-07-22 10:52:11.453905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:36:20.154 [2024-07-22 10:52:11.453918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:43392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.154 [2024-07-22 10:52:11.453923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:36:20.154 [2024-07-22 10:52:11.453937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:43400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.154 [2024-07-22 10:52:11.453942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:36:20.154 [2024-07-22 10:52:11.453955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:43408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.154 [2024-07-22 10:52:11.453960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:36:20.154 [2024-07-22 10:52:11.453973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:43416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.155 [2024-07-22 10:52:11.453978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:36:20.155 [2024-07-22 10:52:11.453991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:43424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.155 [2024-07-22 10:52:11.453996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:36:20.155 [2024-07-22 10:52:11.454009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:43432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.155 [2024-07-22 10:52:11.454014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:20.155 [2024-07-22 10:52:11.454027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:43440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.155 [2024-07-22 10:52:11.454032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:36:20.155 [2024-07-22 10:52:11.454045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:43448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.155 [2024-07-22 10:52:11.454050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 
sqhd:006e p:0 m:0 dnr:0 00:36:20.155 [2024-07-22 10:52:11.454063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:43456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.155 [2024-07-22 10:52:11.454070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:36:20.155 [2024-07-22 10:52:11.454083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:43464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.155 [2024-07-22 10:52:11.454088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:20.155 [2024-07-22 10:52:11.454102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:43472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.155 [2024-07-22 10:52:11.454106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:20.155 [2024-07-22 10:52:11.454119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:43480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.155 [2024-07-22 10:52:11.454124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:20.155 [2024-07-22 10:52:11.454137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:43488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.155 [2024-07-22 10:52:11.454142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:36:20.155 [2024-07-22 10:52:11.454155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:43496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.155 [2024-07-22 10:52:11.454160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:36:20.155 [2024-07-22 10:52:11.454173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:43504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.155 [2024-07-22 10:52:11.454178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:20.155 [2024-07-22 10:52:11.454191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:43512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.155 [2024-07-22 10:52:11.454197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:20.155 [2024-07-22 10:52:11.454210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:43520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.155 [2024-07-22 10:52:11.454216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:36:20.155 [2024-07-22 10:52:11.454229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:43528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.155 [2024-07-22 10:52:11.454234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:36:20.155 [2024-07-22 10:52:11.455068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:43536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.155 [2024-07-22 10:52:11.455076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:36:20.155 [2024-07-22 10:52:11.455094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:43544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.155 [2024-07-22 10:52:11.455099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:20.155 [2024-07-22 10:52:11.455116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:43552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.155 [2024-07-22 10:52:11.455123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:20.155 [2024-07-22 10:52:11.455139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:43560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.155 [2024-07-22 10:52:11.455144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:20.155 [2024-07-22 10:52:11.455161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:43568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.155 [2024-07-22 10:52:11.455166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:20.155 [2024-07-22 10:52:11.455182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:43576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.155 [2024-07-22 10:52:11.455188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:36:20.155 [2024-07-22 10:52:11.455204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:43584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.155 [2024-07-22 10:52:11.455209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:20.155 [2024-07-22 10:52:11.455225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:43592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.155 [2024-07-22 10:52:11.455230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:20.155 [2024-07-22 10:52:11.455247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:43600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.155 [2024-07-22 10:52:11.455253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.155 [2024-07-22 10:52:11.455269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:43608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.155 [2024-07-22 
10:52:11.455274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:20.155 [2024-07-22 10:52:11.455291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:43616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.155 [2024-07-22 10:52:11.455296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:36:20.155 [2024-07-22 10:52:11.455312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:43624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.155 [2024-07-22 10:52:11.455317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:36:20.155 [2024-07-22 10:52:11.455333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:43632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.155 [2024-07-22 10:52:11.455338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:36:20.155 [2024-07-22 10:52:11.455355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:43640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.155 [2024-07-22 10:52:11.455360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:36:20.155 [2024-07-22 10:52:11.455376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:43648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.155 [2024-07-22 10:52:11.455382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:36:20.155 [2024-07-22 10:52:11.455434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:43656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.155 [2024-07-22 10:52:11.455440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:36:20.155 [2024-07-22 10:52:11.455458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:43664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.155 [2024-07-22 10:52:11.455464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:36:20.155 [2024-07-22 10:52:11.455481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:43672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.155 [2024-07-22 10:52:11.455486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:36:20.155 [2024-07-22 10:52:11.455503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:43680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.155 [2024-07-22 10:52:11.455508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:36:20.155 [2024-07-22 10:52:11.455525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:43688 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:36:20.155 [2024-07-22 10:52:11.455531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:36:20.155 [2024-07-22 10:52:11.455548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:43696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.155 [2024-07-22 10:52:11.455553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:36:20.155 [2024-07-22 10:52:11.455570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:43704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.156 [2024-07-22 10:52:11.455575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:36:20.156 [2024-07-22 10:52:11.455592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:43712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.156 [2024-07-22 10:52:11.455597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:36:20.156 [2024-07-22 10:52:11.455615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:43720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.156 [2024-07-22 10:52:11.455619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:20.156 [2024-07-22 10:52:23.575486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:110776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.156 [2024-07-22 10:52:23.575523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:36:20.156 [2024-07-22 10:52:23.575551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:110792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.156 [2024-07-22 10:52:23.575558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:36:20.156 [2024-07-22 10:52:23.575569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:110872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.156 [2024-07-22 10:52:23.575574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:20.156 [2024-07-22 10:52:23.575590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:110888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.156 [2024-07-22 10:52:23.575595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:20.156 [2024-07-22 10:52:23.575605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:110904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.156 [2024-07-22 10:52:23.575610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:20.156 [2024-07-22 10:52:23.575620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:49 nsid:1 lba:110920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.156 [2024-07-22 10:52:23.575625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:20.156 [2024-07-22 10:52:23.575635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:110936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.156 [2024-07-22 10:52:23.575641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:36:20.156 [2024-07-22 10:52:23.575651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:110952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.156 [2024-07-22 10:52:23.575656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:20.156 [2024-07-22 10:52:23.575666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:110136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.156 [2024-07-22 10:52:23.575671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:20.156 [2024-07-22 10:52:23.575681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:110168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.156 [2024-07-22 10:52:23.575686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.156 [2024-07-22 10:52:23.575696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:110200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.156 [2024-07-22 10:52:23.575701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:20.156 [2024-07-22 10:52:23.575711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:110232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.156 [2024-07-22 10:52:23.575716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:36:20.156 [2024-07-22 10:52:23.575727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:110968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.156 [2024-07-22 10:52:23.575732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:36:20.156 [2024-07-22 10:52:23.575742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:110984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.156 [2024-07-22 10:52:23.575747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:36:20.156 [2024-07-22 10:52:23.575757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:111000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.156 [2024-07-22 10:52:23.575762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:36:20.156 [2024-07-22 10:52:23.575772] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:110808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.156 [2024-07-22 10:52:23.575778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:36:20.156 [2024-07-22 10:52:23.575789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:110840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.156 [2024-07-22 10:52:23.575794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:36:20.156 [2024-07-22 10:52:23.575930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:111024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.156 [2024-07-22 10:52:23.575938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:36:20.156 [2024-07-22 10:52:23.575949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:111040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.156 [2024-07-22 10:52:23.575954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:36:20.156 [2024-07-22 10:52:23.575964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:111056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.156 [2024-07-22 10:52:23.575970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:36:20.156 [2024-07-22 10:52:23.575980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:111072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.156 [2024-07-22 10:52:23.575985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:36:20.156 [2024-07-22 10:52:23.575995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:110256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.156 [2024-07-22 10:52:23.576000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:36:20.156 [2024-07-22 10:52:23.576010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:110288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.156 [2024-07-22 10:52:23.576015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:36:20.156 [2024-07-22 10:52:23.576025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:110320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.156 [2024-07-22 10:52:23.576030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:36:20.156 [2024-07-22 10:52:23.576096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:110352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.156 [2024-07-22 10:52:23.576103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:36:20.156 Received shutdown signal, test time was about 25.691721 seconds 00:36:20.156 00:36:20.156 Latency(us) 00:36:20.156 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:20.156 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:36:20.156 Verification LBA range: start 0x0 length 0x4000 00:36:20.156 Nvme0n1 : 25.69 10925.76 42.68 0.00 0.00 11697.25 202.24 3019898.88 00:36:20.156 =================================================================================================================== 00:36:20.156 Total : 10925.76 42.68 0.00 0.00 11697.25 202.24 3019898.88 00:36:20.156 10:52:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:20.418 10:52:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:36:20.418 10:52:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:36:20.418 10:52:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:36:20.418 10:52:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:20.418 10:52:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:36:20.418 10:52:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:20.418 10:52:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:36:20.418 10:52:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:20.418 10:52:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:20.418 rmmod nvme_tcp 00:36:20.418 rmmod nvme_fabrics 00:36:20.418 rmmod nvme_keyring 00:36:20.418 10:52:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:20.418 10:52:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:36:20.418 10:52:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:36:20.418 10:52:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 2202793 ']' 00:36:20.418 10:52:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 2202793 00:36:20.418 10:52:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 2202793 ']' 00:36:20.418 10:52:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 2202793 00:36:20.418 10:52:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:36:20.418 10:52:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:20.418 10:52:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2202793 00:36:20.418 10:52:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:20.418 10:52:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:20.418 10:52:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2202793' 00:36:20.418 killing process with pid 2202793 00:36:20.418 10:52:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # 
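A quick consistency check on the bdevperf summary above: at the job's 4096-byte I/O size the reported IOPS and throughput agree (any POSIX awk will do; this check is illustrative, not part of the harness):

  awk 'BEGIN { printf "%.2f MiB/s\n", 10925.76 * 4096 / 1048576 }'
  # prints 42.68 MiB/s, matching the MiB/s column for both Nvme0n1 and Total; the Average/min/max columns are in microseconds, per the Latency(us) header.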
kill 2202793 00:36:20.418 10:52:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 2202793 00:36:20.677 10:52:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:20.677 10:52:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:20.677 10:52:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:20.677 10:52:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:20.677 10:52:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:20.677 10:52:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:20.677 10:52:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:20.677 10:52:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:23.214 10:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:23.214 00:36:23.214 real 0m39.545s 00:36:23.214 user 1m39.832s 00:36:23.214 sys 0m11.337s 00:36:23.214 10:52:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:23.214 10:52:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:36:23.214 ************************************ 00:36:23.214 END TEST nvmf_host_multipath_status 00:36:23.214 ************************************ 00:36:23.214 10:52:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:36:23.214 10:52:28 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:36:23.214 10:52:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:36:23.214 10:52:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:23.214 10:52:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:23.215 ************************************ 00:36:23.215 START TEST nvmf_discovery_remove_ifc 00:36:23.215 ************************************ 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:36:23.215 * Looking for test storage... 
00:36:23.215 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:36:23.215 10:52:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:31.338 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:31.338 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:36:31.338 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:31.338 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:31.338 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:31.338 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:31.338 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:31.338 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:36:31.338 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:31.338 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:36:31.338 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:36:31.338 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:36:31.338 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:36:31.338 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:36:31.338 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:36:31.338 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:31.338 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:31.338 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:31.338 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:31.338 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:31.338 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:31.338 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:31.338 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:31.338 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:31.338 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:31.338 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:31.338 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:31.338 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:31.338 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:31.338 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:31.338 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:31.338 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:31.338 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:31.338 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:31.338 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:31.338 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:31.338 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:31.338 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:31.338 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:31.338 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:31.338 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:31.339 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:31.339 10:52:36 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:31.339 Found net devices under 0000:31:00.0: cvl_0_0 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:31.339 Found net devices under 0000:31:00.1: cvl_0_1 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:31.339 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:31.339 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.782 ms 00:36:31.339 00:36:31.339 --- 10.0.0.2 ping statistics --- 00:36:31.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:31.339 rtt min/avg/max/mdev = 0.782/0.782/0.782/0.000 ms 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:31.339 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:31.339 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.335 ms 00:36:31.339 00:36:31.339 --- 10.0.0.1 ping statistics --- 00:36:31.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:31.339 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=2213109 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 2213109 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 2213109 ']' 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:31.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:31.339 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:31.339 [2024-07-22 10:52:36.610119] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
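[Annotation] The trace above completes the nvmf_tcp_init step from common.sh: one of the two ice/E810 ports (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2/24 to act as the target, its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1/24, TCP port 4420 is opened in iptables, and connectivity is verified with one ping in each direction. A minimal standalone sketch of the same wiring follows; the interface names eth_tgt/eth_ini and the namespace name nvmf_tgt_ns are placeholders for this sketch, not values from this run.

# Sketch only: rebuild the two-port test topology shown in the trace above.
# eth_tgt / eth_ini / nvmf_tgt_ns are assumed names, standing in for
# cvl_0_0 / cvl_0_1 / cvl_0_0_ns_spdk from this run.
set -e
NS=nvmf_tgt_ns
TGT_IF=eth_tgt          # target-side port, will live inside the namespace
INI_IF=eth_ini          # initiator-side port, stays in the root namespace

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# let NVMe/TCP I/O traffic (port 4420) in on the initiator side
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

# same sanity check as the test: one ping in each direction
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1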
00:36:31.339 [2024-07-22 10:52:36.610186] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:31.339 EAL: No free 2048 kB hugepages reported on node 1 00:36:31.339 [2024-07-22 10:52:36.706074] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:31.339 [2024-07-22 10:52:36.742268] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:31.339 [2024-07-22 10:52:36.742310] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:31.339 [2024-07-22 10:52:36.742317] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:31.339 [2024-07-22 10:52:36.742323] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:31.339 [2024-07-22 10:52:36.742329] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:31.339 [2024-07-22 10:52:36.742348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:31.908 10:52:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:31.908 10:52:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:36:31.908 10:52:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:31.908 10:52:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:31.908 10:52:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:31.908 10:52:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:31.908 10:52:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:36:31.908 10:52:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:31.908 10:52:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:31.908 [2024-07-22 10:52:37.423056] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:31.908 [2024-07-22 10:52:37.431195] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:36:31.908 null0 00:36:31.908 [2024-07-22 10:52:37.463209] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:31.908 10:52:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:31.908 10:52:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2213394 00:36:31.908 10:52:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2213394 /tmp/host.sock 00:36:31.908 10:52:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:36:31.908 10:52:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 2213394 ']' 00:36:31.908 10:52:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:36:31.908 10:52:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:36:31.908 10:52:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:36:31.908 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:36:31.908 10:52:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:31.908 10:52:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:31.908 [2024-07-22 10:52:37.543084] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:36:31.908 [2024-07-22 10:52:37.543133] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2213394 ] 00:36:31.908 EAL: No free 2048 kB hugepages reported on node 1 00:36:31.908 [2024-07-22 10:52:37.606332] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:32.167 [2024-07-22 10:52:37.637509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:32.738 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:32.738 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:36:32.738 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:32.738 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:36:32.738 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:32.738 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:32.738 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:32.738 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:36:32.738 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:32.738 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:32.738 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:32.738 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:36:32.738 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:32.738 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:34.118 [2024-07-22 10:52:39.402060] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:36:34.118 [2024-07-22 10:52:39.402082] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:36:34.118 [2024-07-22 10:52:39.402095] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:36:34.118 [2024-07-22 10:52:39.492390] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:36:34.118 [2024-07-22 10:52:39.552709] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:36:34.118 [2024-07-22 10:52:39.552756] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:36:34.118 [2024-07-22 10:52:39.552778] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:36:34.118 [2024-07-22 10:52:39.552792] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:36:34.118 [2024-07-22 10:52:39.552812] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:36:34.118 10:52:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:34.118 10:52:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:36:34.118 10:52:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:34.118 10:52:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:34.118 10:52:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:34.118 10:52:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:34.118 [2024-07-22 10:52:39.560615] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x25082b0 was disconnected and fre 10:52:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:34.118 ed. delete nvme_qpair. 00:36:34.118 10:52:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:34.118 10:52:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:34.118 10:52:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:34.118 10:52:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:36:34.118 10:52:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:36:34.118 10:52:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:36:34.118 10:52:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:36:34.119 10:52:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:34.119 10:52:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:34.119 10:52:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:34.119 10:52:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:34.119 10:52:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:34.119 10:52:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:34.119 10:52:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:34.119 10:52:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:34.119 10:52:39 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:36:34.119 10:52:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:35.497 10:52:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:35.497 10:52:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:35.497 10:52:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:35.497 10:52:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:35.497 10:52:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:35.497 10:52:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:35.497 10:52:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:35.497 10:52:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:35.497 10:52:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:36:35.497 10:52:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:36.434 10:52:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:36.434 10:52:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:36.434 10:52:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:36.434 10:52:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:36.434 10:52:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:36.434 10:52:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:36.434 10:52:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:36.434 10:52:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:36.434 10:52:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:36:36.434 10:52:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:37.374 10:52:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:37.374 10:52:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:37.375 10:52:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:37.375 10:52:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:37.375 10:52:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:37.375 10:52:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:37.375 10:52:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:37.375 10:52:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:37.375 10:52:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:36:37.375 10:52:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:38.316 10:52:43 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:38.316 10:52:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:38.316 10:52:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:38.316 10:52:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:38.316 10:52:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:38.316 10:52:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:38.316 10:52:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:38.316 10:52:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:38.575 10:52:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:36:38.575 10:52:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:39.513 [2024-07-22 10:52:44.993566] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:36:39.513 [2024-07-22 10:52:44.993607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:36:39.513 [2024-07-22 10:52:44.993619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.513 [2024-07-22 10:52:44.993629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:39.513 [2024-07-22 10:52:44.993637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.513 [2024-07-22 10:52:44.993646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:36:39.513 [2024-07-22 10:52:44.993653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.513 [2024-07-22 10:52:44.993661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:39.513 [2024-07-22 10:52:44.993668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.513 [2024-07-22 10:52:44.993676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:36:39.513 [2024-07-22 10:52:44.993683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.513 [2024-07-22 10:52:44.993691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ceb20 is same with the state(5) to be set 00:36:39.513 [2024-07-22 10:52:45.003585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ceb20 (9): Bad file descriptor 00:36:39.513 [2024-07-22 10:52:45.013624] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:36:39.513 10:52:45 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:39.513 10:52:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:39.513 10:52:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:39.513 10:52:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:39.513 10:52:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:39.513 10:52:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:39.513 10:52:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:40.457 [2024-07-22 10:52:46.017422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:36:40.457 [2024-07-22 10:52:46.017460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ceb20 with addr=10.0.0.2, port=4420 00:36:40.457 [2024-07-22 10:52:46.017476] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ceb20 is same with the state(5) to be set 00:36:40.457 [2024-07-22 10:52:46.017498] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ceb20 (9): Bad file descriptor 00:36:40.457 [2024-07-22 10:52:46.017861] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:36:40.457 [2024-07-22 10:52:46.017879] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:40.457 [2024-07-22 10:52:46.017886] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:36:40.457 [2024-07-22 10:52:46.017894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:40.457 [2024-07-22 10:52:46.017910] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:40.457 [2024-07-22 10:52:46.017918] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:36:40.457 10:52:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:40.457 10:52:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:36:40.457 10:52:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:41.397 [2024-07-22 10:52:47.020291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:41.397 [2024-07-22 10:52:47.020312] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:41.397 [2024-07-22 10:52:47.020320] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:36:41.397 [2024-07-22 10:52:47.020327] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:36:41.397 [2024-07-22 10:52:47.020340] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
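[Annotation] The get_bdev_list / wait_for_bdev calls repeated through this stretch are a plain polling loop over the host RPC socket: list bdev names, compare against the expected value, sleep one second, retry. A condensed sketch of that pair of helpers is below, assuming SPDK's scripts/rpc.py is reachable as rpc.py and the host app is listening on /tmp/host.sock as in this run; like the real helpers in the test scripts, it has no explicit timeout.

# Sketch of the polling pattern traced above; rpc.py stands in for the
# test framework's rpc_cmd wrapper.
get_bdev_list() {
    rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    local expected=$1
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
    done
}

wait_for_bdev nvme0n1    # block until discovery has attached the namespace bdev
wait_for_bdev ''         # later: block until the bdev is gone again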
00:36:41.397 [2024-07-22 10:52:47.020359] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:36:41.397 [2024-07-22 10:52:47.020378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:36:41.397 [2024-07-22 10:52:47.020387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:41.397 [2024-07-22 10:52:47.020402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:41.397 [2024-07-22 10:52:47.020410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:41.397 [2024-07-22 10:52:47.020418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:36:41.397 [2024-07-22 10:52:47.020425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:41.397 [2024-07-22 10:52:47.020433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:41.397 [2024-07-22 10:52:47.020441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:41.397 [2024-07-22 10:52:47.020449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:36:41.397 [2024-07-22 10:52:47.020456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:41.397 [2024-07-22 10:52:47.020463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
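[Annotation] The error burst above is the intended failure injection: earlier in the trace the test deleted 10.0.0.2/24 from cvl_0_0 and downed the link inside the namespace, so every reconnect attempt times out (errno 110) until the 2-second ctrlr-loss timeout expires and the discovery entry is removed. The toggle is just two commands each way, copied from this run; the re-add that lets nvme1 attach appears a little further down.

# Pull the target interface out from under the connection (already done above)...
ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down

# ...then restore it so the discovery service can re-attach (traced below)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up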
00:36:41.397 [2024-07-22 10:52:47.020939] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24cdfd0 (9): Bad file descriptor 00:36:41.397 [2024-07-22 10:52:47.021952] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:36:41.397 [2024-07-22 10:52:47.021963] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:36:41.397 10:52:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:41.397 10:52:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:41.397 10:52:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:41.397 10:52:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:41.397 10:52:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:41.397 10:52:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:41.397 10:52:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:41.397 10:52:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:41.657 10:52:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:36:41.657 10:52:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:41.657 10:52:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:41.657 10:52:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:36:41.657 10:52:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:41.657 10:52:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:41.657 10:52:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:41.657 10:52:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:41.657 10:52:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:41.657 10:52:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:41.657 10:52:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:41.657 10:52:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:41.657 10:52:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:36:41.657 10:52:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:42.595 10:52:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:42.595 10:52:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:42.595 10:52:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:42.595 10:52:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:42.595 10:52:48 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # sort 00:36:42.595 10:52:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:42.595 10:52:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:42.595 10:52:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:42.854 10:52:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:36:42.854 10:52:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:43.421 [2024-07-22 10:52:49.035857] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:36:43.421 [2024-07-22 10:52:49.035874] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:36:43.421 [2024-07-22 10:52:49.035886] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:36:43.679 [2024-07-22 10:52:49.123165] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:36:43.679 [2024-07-22 10:52:49.307157] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:36:43.679 [2024-07-22 10:52:49.307197] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:36:43.679 [2024-07-22 10:52:49.307216] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:36:43.679 [2024-07-22 10:52:49.307230] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:36:43.679 [2024-07-22 10:52:49.307238] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:36:43.679 [2024-07-22 10:52:49.314364] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x24ebd70 was disconnected and freed. delete nvme_qpair. 
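[Annotation] The nvme1 attach above is driven by the same discovery service that was configured at the start of this test. Stripped of the rpc_cmd wrapper, the host side boils down to starting a second SPDK app on /tmp/host.sock and issuing three RPCs; the arguments below are copied from this trace (binary path shortened), with rpc.py standing in for the wrapper.

# Host-side app: acts as the NVMe-oF discovery client / initiator in this test.
./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
# (the test waits for /tmp/host.sock to appear before issuing RPCs)

# Options exactly as issued in this trace via rpc_cmd.
rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
rpc.py -s /tmp/host.sock framework_start_init
rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
    --wait-for-attach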
00:36:43.679 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:43.679 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:43.679 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:43.679 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.679 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:43.679 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:43.679 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:43.679 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.679 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:36:43.679 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:36:43.679 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2213394 00:36:43.679 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 2213394 ']' 00:36:43.679 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 2213394 00:36:43.679 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:36:43.938 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:43.938 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2213394 00:36:43.938 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:43.938 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:43.938 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2213394' 00:36:43.938 killing process with pid 2213394 00:36:43.938 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 2213394 00:36:43.938 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 2213394 00:36:43.938 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:36:43.938 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:43.938 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:36:43.938 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:43.938 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:36:43.938 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:43.938 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:43.938 rmmod nvme_tcp 00:36:43.938 rmmod nvme_fabrics 00:36:43.938 rmmod nvme_keyring 00:36:43.938 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:43.938 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:36:43.938 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
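[Annotation] killprocess, traced above for the host app (pid 2213394) and just below for the target (pid 2213109), is a small guard around kill: check the argument is non-empty, confirm the pid is alive with kill -0, look up the command name with ps so a sudo wrapper is never the direct target, then kill and reap. A sketch matching the checks visible in the trace; the real helper's sudo branch, elided here, escalates differently.

# Sketch of the killprocess pattern traced above.
killprocess() {
    local pid=$1
    [[ -n "$pid" ]] || return 1
    kill -0 "$pid" 2>/dev/null || return 1          # still running?
    if [[ "$(uname)" == Linux ]]; then
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [[ "$name" == sudo ]] && return 1           # real helper handles sudo specially; elided
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                 # reap it if it is a child of this shell
}

killprocess 2213394    # host app, as above
killprocess 2213109    # target app, as below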
00:36:43.938 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 2213109 ']' 00:36:43.938 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 2213109 00:36:43.938 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 2213109 ']' 00:36:43.938 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 2213109 00:36:43.938 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:36:43.938 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:43.938 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2213109 00:36:44.197 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:44.197 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:44.197 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2213109' 00:36:44.197 killing process with pid 2213109 00:36:44.197 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 2213109 00:36:44.197 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 2213109 00:36:44.197 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:44.197 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:44.197 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:44.197 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:44.197 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:44.197 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:44.197 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:44.197 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:46.736 10:52:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:46.736 00:36:46.736 real 0m23.485s 00:36:46.736 user 0m27.064s 00:36:46.736 sys 0m7.159s 00:36:46.736 10:52:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:46.736 10:52:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:46.736 ************************************ 00:36:46.736 END TEST nvmf_discovery_remove_ifc 00:36:46.736 ************************************ 00:36:46.736 10:52:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:36:46.736 10:52:51 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:36:46.736 10:52:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:36:46.736 10:52:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:46.736 10:52:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:46.736 ************************************ 00:36:46.736 START TEST nvmf_identify_kernel_target 00:36:46.736 ************************************ 
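[Annotation] run_test, seen above both closing nvmf_discovery_remove_ifc (with the real/user/sys timing summary) and launching nvmf_identify_kernel_target, wraps a test script in timing plus START/END banners. A rough approximation of that wrapper is below; the real helper in autotest_common.sh also records per-test results for the final report, which is omitted here.

# Rough approximation of the run_test wrapper producing the banners above.
run_test() {
    local test_name=$1; shift
    (( $# >= 1 )) || return 1          # mirrors the '[' N -le 1 ']' guard in the trace
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"
    local rc=$?                         # exit status of the test script, not of echo
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}

run_test nvmf_identify_kernel_target ./test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp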
00:36:46.736 10:52:51 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:36:46.736 * Looking for test storage... 00:36:46.736 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:46.736 10:52:52 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:46.736 10:52:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:36:46.736 10:52:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:46.736 10:52:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:46.736 10:52:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:46.736 10:52:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:46.736 10:52:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:46.736 10:52:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:46.736 10:52:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:46.736 10:52:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:46.736 10:52:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:46.736 10:52:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:46.736 10:52:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:46.736 10:52:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:46.736 10:52:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:46.736 10:52:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:46.736 10:52:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:46.736 10:52:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:46.736 10:52:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:46.736 10:52:52 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:46.736 10:52:52 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:46.736 10:52:52 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:46.736 10:52:52 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:46.736 10:52:52 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:46.736 10:52:52 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:46.736 10:52:52 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:36:46.736 10:52:52 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:46.736 10:52:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:36:46.736 10:52:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:46.736 10:52:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:46.736 10:52:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:46.736 10:52:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:46.736 10:52:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:46.736 10:52:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:46.736 10:52:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:46.736 10:52:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:46.736 10:52:52 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:36:46.736 10:52:52 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:46.736 10:52:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:46.736 10:52:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:46.736 10:52:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:46.736 10:52:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:46.736 10:52:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:46.736 10:52:52 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:46.736 10:52:52 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:46.736 10:52:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:46.736 10:52:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:46.736 10:52:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:36:46.736 10:52:52 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:36:54.869 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:54.869 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:36:54.869 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:54.869 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:54.869 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:54.869 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:54.869 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:54.869 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:36:54.869 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:54.869 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:36:54.869 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:36:54.869 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:36:54.869 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:36:54.869 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:36:54.869 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:36:54.869 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:54.869 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:54.869 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:54.869 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:54.869 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:54.869 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:54.869 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:54.869 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:54.869 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:54.869 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:54.869 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:54.869 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:54.869 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:54.869 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:54.869 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:54.869 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:54.869 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:54.869 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:54.869 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:54.869 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:54.869 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:54.869 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:54.870 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:54.870 Found net devices under 0000:31:00.0: cvl_0_0 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:54.870 Found net devices under 0000:31:00.1: cvl_0_1 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:54.870 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:54.870 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.701 ms 00:36:54.870 00:36:54.870 --- 10.0.0.2 ping statistics --- 00:36:54.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:54.870 rtt min/avg/max/mdev = 0.701/0.701/0.701/0.000 ms 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:54.870 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
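What nvmf_tcp_init has just set up, condensed into a reproducible sketch: one of the two ice ports (cvl_0_0) is moved into a private network namespace while its sibling (cvl_0_1) stays in the root namespace, so host and target traffic have to cross the physical link even though everything runs on one machine, and TCP port 4420 is opened for NVMe/TCP. The interface names and the 10.0.0.0/24 addresses below are simply the ones this run uses; the two ping checks in the trace (one in each direction) verify the link before the test proper starts.

  # Namespace-based NVMe/TCP test topology, as driven by nvmf_tcp_init in the trace.
  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1          # start from clean addresses
  ip netns add cvl_0_0_ns_spdk                                # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                   # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                         # initiator side stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                          # root ns -> namespaced port
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1            # and back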
00:36:54.870 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.347 ms 00:36:54.870 00:36:54.870 --- 10.0.0.1 ping statistics --- 00:36:54.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:54.870 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:54.870 10:52:59 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:54.870 10:52:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:58.253 Waiting for block devices as requested 00:36:58.253 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:58.253 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:58.253 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:58.514 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:58.514 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:58.514 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:58.774 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:58.774 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:58.774 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:36:59.034 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:59.034 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:59.034 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:59.295 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:59.295 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:59.296 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:59.296 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:59.557 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:59.557 10:53:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:36:59.558 10:53:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:59.558 10:53:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:36:59.558 10:53:05 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:36:59.558 10:53:05 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:59.558 10:53:05 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:36:59.558 10:53:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:36:59.558 10:53:05 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:36:59.558 10:53:05 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:59.558 No valid GPT data, bailing 00:36:59.558 10:53:05 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:59.558 10:53:05 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:36:59.558 10:53:05 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:36:59.558 10:53:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:36:59.558 10:53:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:36:59.558 10:53:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:59.558 10:53:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:59.558 10:53:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:59.558 10:53:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:59.558 10:53:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:36:59.558 10:53:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:36:59.558 10:53:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:36:59.558 10:53:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:36:59.558 10:53:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:36:59.558 10:53:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:36:59.558 10:53:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:36:59.558 10:53:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:59.558 10:53:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:36:59.558 00:36:59.558 Discovery Log Number of Records 2, Generation counter 2 00:36:59.558 =====Discovery Log Entry 0====== 00:36:59.558 trtype: tcp 00:36:59.558 adrfam: ipv4 00:36:59.558 subtype: current discovery subsystem 00:36:59.558 treq: not specified, sq flow control disable supported 00:36:59.558 portid: 1 00:36:59.558 trsvcid: 4420 00:36:59.558 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:59.558 traddr: 10.0.0.1 00:36:59.558 eflags: none 00:36:59.558 sectype: none 00:36:59.558 =====Discovery Log Entry 1====== 00:36:59.558 trtype: tcp 00:36:59.558 adrfam: ipv4 00:36:59.558 subtype: nvme subsystem 00:36:59.558 treq: not specified, sq flow control disable supported 00:36:59.558 portid: 1 00:36:59.558 trsvcid: 4420 00:36:59.558 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:59.558 traddr: 10.0.0.1 00:36:59.558 eflags: none 00:36:59.558 sectype: none 00:36:59.558 10:53:05 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:36:59.558 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:36:59.558 EAL: No free 2048 kB hugepages reported on node 1 00:36:59.558 ===================================================== 00:36:59.558 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:36:59.558 ===================================================== 00:36:59.558 Controller Capabilities/Features 00:36:59.558 ================================ 00:36:59.558 Vendor ID: 0000 00:36:59.558 Subsystem Vendor ID: 0000 00:36:59.558 Serial Number: cd3bba232a497fc28350 00:36:59.558 Model Number: Linux 00:36:59.558 Firmware Version: 6.7.0-68 00:36:59.558 Recommended Arb Burst: 0 00:36:59.558 IEEE OUI Identifier: 00 00 00 00:36:59.558 Multi-path I/O 00:36:59.558 May have multiple subsystem ports: No 00:36:59.558 May have multiple 
controllers: No 00:36:59.558 Associated with SR-IOV VF: No 00:36:59.558 Max Data Transfer Size: Unlimited 00:36:59.558 Max Number of Namespaces: 0 00:36:59.558 Max Number of I/O Queues: 1024 00:36:59.558 NVMe Specification Version (VS): 1.3 00:36:59.558 NVMe Specification Version (Identify): 1.3 00:36:59.558 Maximum Queue Entries: 1024 00:36:59.558 Contiguous Queues Required: No 00:36:59.558 Arbitration Mechanisms Supported 00:36:59.558 Weighted Round Robin: Not Supported 00:36:59.558 Vendor Specific: Not Supported 00:36:59.558 Reset Timeout: 7500 ms 00:36:59.558 Doorbell Stride: 4 bytes 00:36:59.558 NVM Subsystem Reset: Not Supported 00:36:59.558 Command Sets Supported 00:36:59.558 NVM Command Set: Supported 00:36:59.558 Boot Partition: Not Supported 00:36:59.558 Memory Page Size Minimum: 4096 bytes 00:36:59.558 Memory Page Size Maximum: 4096 bytes 00:36:59.558 Persistent Memory Region: Not Supported 00:36:59.558 Optional Asynchronous Events Supported 00:36:59.558 Namespace Attribute Notices: Not Supported 00:36:59.558 Firmware Activation Notices: Not Supported 00:36:59.558 ANA Change Notices: Not Supported 00:36:59.558 PLE Aggregate Log Change Notices: Not Supported 00:36:59.558 LBA Status Info Alert Notices: Not Supported 00:36:59.558 EGE Aggregate Log Change Notices: Not Supported 00:36:59.558 Normal NVM Subsystem Shutdown event: Not Supported 00:36:59.558 Zone Descriptor Change Notices: Not Supported 00:36:59.558 Discovery Log Change Notices: Supported 00:36:59.558 Controller Attributes 00:36:59.558 128-bit Host Identifier: Not Supported 00:36:59.558 Non-Operational Permissive Mode: Not Supported 00:36:59.558 NVM Sets: Not Supported 00:36:59.558 Read Recovery Levels: Not Supported 00:36:59.558 Endurance Groups: Not Supported 00:36:59.558 Predictable Latency Mode: Not Supported 00:36:59.558 Traffic Based Keep ALive: Not Supported 00:36:59.558 Namespace Granularity: Not Supported 00:36:59.558 SQ Associations: Not Supported 00:36:59.558 UUID List: Not Supported 00:36:59.558 Multi-Domain Subsystem: Not Supported 00:36:59.558 Fixed Capacity Management: Not Supported 00:36:59.558 Variable Capacity Management: Not Supported 00:36:59.558 Delete Endurance Group: Not Supported 00:36:59.558 Delete NVM Set: Not Supported 00:36:59.558 Extended LBA Formats Supported: Not Supported 00:36:59.558 Flexible Data Placement Supported: Not Supported 00:36:59.558 00:36:59.558 Controller Memory Buffer Support 00:36:59.558 ================================ 00:36:59.558 Supported: No 00:36:59.558 00:36:59.558 Persistent Memory Region Support 00:36:59.558 ================================ 00:36:59.558 Supported: No 00:36:59.558 00:36:59.558 Admin Command Set Attributes 00:36:59.558 ============================ 00:36:59.558 Security Send/Receive: Not Supported 00:36:59.558 Format NVM: Not Supported 00:36:59.558 Firmware Activate/Download: Not Supported 00:36:59.558 Namespace Management: Not Supported 00:36:59.558 Device Self-Test: Not Supported 00:36:59.558 Directives: Not Supported 00:36:59.558 NVMe-MI: Not Supported 00:36:59.558 Virtualization Management: Not Supported 00:36:59.558 Doorbell Buffer Config: Not Supported 00:36:59.558 Get LBA Status Capability: Not Supported 00:36:59.558 Command & Feature Lockdown Capability: Not Supported 00:36:59.558 Abort Command Limit: 1 00:36:59.558 Async Event Request Limit: 1 00:36:59.558 Number of Firmware Slots: N/A 00:36:59.558 Firmware Slot 1 Read-Only: N/A 00:36:59.558 Firmware Activation Without Reset: N/A 00:36:59.558 Multiple Update Detection Support: N/A 
00:36:59.558 Firmware Update Granularity: No Information Provided 00:36:59.558 Per-Namespace SMART Log: No 00:36:59.558 Asymmetric Namespace Access Log Page: Not Supported 00:36:59.558 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:36:59.558 Command Effects Log Page: Not Supported 00:36:59.558 Get Log Page Extended Data: Supported 00:36:59.558 Telemetry Log Pages: Not Supported 00:36:59.558 Persistent Event Log Pages: Not Supported 00:36:59.558 Supported Log Pages Log Page: May Support 00:36:59.558 Commands Supported & Effects Log Page: Not Supported 00:36:59.558 Feature Identifiers & Effects Log Page:May Support 00:36:59.558 NVMe-MI Commands & Effects Log Page: May Support 00:36:59.558 Data Area 4 for Telemetry Log: Not Supported 00:36:59.558 Error Log Page Entries Supported: 1 00:36:59.558 Keep Alive: Not Supported 00:36:59.558 00:36:59.558 NVM Command Set Attributes 00:36:59.558 ========================== 00:36:59.558 Submission Queue Entry Size 00:36:59.558 Max: 1 00:36:59.558 Min: 1 00:36:59.558 Completion Queue Entry Size 00:36:59.558 Max: 1 00:36:59.558 Min: 1 00:36:59.558 Number of Namespaces: 0 00:36:59.558 Compare Command: Not Supported 00:36:59.558 Write Uncorrectable Command: Not Supported 00:36:59.558 Dataset Management Command: Not Supported 00:36:59.558 Write Zeroes Command: Not Supported 00:36:59.558 Set Features Save Field: Not Supported 00:36:59.558 Reservations: Not Supported 00:36:59.558 Timestamp: Not Supported 00:36:59.558 Copy: Not Supported 00:36:59.558 Volatile Write Cache: Not Present 00:36:59.558 Atomic Write Unit (Normal): 1 00:36:59.558 Atomic Write Unit (PFail): 1 00:36:59.558 Atomic Compare & Write Unit: 1 00:36:59.558 Fused Compare & Write: Not Supported 00:36:59.558 Scatter-Gather List 00:36:59.558 SGL Command Set: Supported 00:36:59.558 SGL Keyed: Not Supported 00:36:59.559 SGL Bit Bucket Descriptor: Not Supported 00:36:59.559 SGL Metadata Pointer: Not Supported 00:36:59.559 Oversized SGL: Not Supported 00:36:59.559 SGL Metadata Address: Not Supported 00:36:59.559 SGL Offset: Supported 00:36:59.559 Transport SGL Data Block: Not Supported 00:36:59.559 Replay Protected Memory Block: Not Supported 00:36:59.559 00:36:59.559 Firmware Slot Information 00:36:59.559 ========================= 00:36:59.559 Active slot: 0 00:36:59.559 00:36:59.559 00:36:59.559 Error Log 00:36:59.559 ========= 00:36:59.559 00:36:59.559 Active Namespaces 00:36:59.559 ================= 00:36:59.559 Discovery Log Page 00:36:59.559 ================== 00:36:59.559 Generation Counter: 2 00:36:59.559 Number of Records: 2 00:36:59.559 Record Format: 0 00:36:59.559 00:36:59.559 Discovery Log Entry 0 00:36:59.559 ---------------------- 00:36:59.559 Transport Type: 3 (TCP) 00:36:59.559 Address Family: 1 (IPv4) 00:36:59.559 Subsystem Type: 3 (Current Discovery Subsystem) 00:36:59.559 Entry Flags: 00:36:59.559 Duplicate Returned Information: 0 00:36:59.559 Explicit Persistent Connection Support for Discovery: 0 00:36:59.559 Transport Requirements: 00:36:59.559 Secure Channel: Not Specified 00:36:59.559 Port ID: 1 (0x0001) 00:36:59.559 Controller ID: 65535 (0xffff) 00:36:59.559 Admin Max SQ Size: 32 00:36:59.559 Transport Service Identifier: 4420 00:36:59.559 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:36:59.559 Transport Address: 10.0.0.1 00:36:59.559 Discovery Log Entry 1 00:36:59.559 ---------------------- 00:36:59.559 Transport Type: 3 (TCP) 00:36:59.559 Address Family: 1 (IPv4) 00:36:59.559 Subsystem Type: 2 (NVM Subsystem) 00:36:59.559 Entry Flags: 
00:36:59.559 Duplicate Returned Information: 0 00:36:59.559 Explicit Persistent Connection Support for Discovery: 0 00:36:59.559 Transport Requirements: 00:36:59.559 Secure Channel: Not Specified 00:36:59.559 Port ID: 1 (0x0001) 00:36:59.559 Controller ID: 65535 (0xffff) 00:36:59.559 Admin Max SQ Size: 32 00:36:59.559 Transport Service Identifier: 4420 00:36:59.559 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:36:59.559 Transport Address: 10.0.0.1 00:36:59.559 10:53:05 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:59.559 EAL: No free 2048 kB hugepages reported on node 1 00:36:59.821 get_feature(0x01) failed 00:36:59.821 get_feature(0x02) failed 00:36:59.821 get_feature(0x04) failed 00:36:59.821 ===================================================== 00:36:59.821 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:59.821 ===================================================== 00:36:59.821 Controller Capabilities/Features 00:36:59.821 ================================ 00:36:59.821 Vendor ID: 0000 00:36:59.821 Subsystem Vendor ID: 0000 00:36:59.821 Serial Number: 5994c0d3951e6bf525a2 00:36:59.821 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:36:59.821 Firmware Version: 6.7.0-68 00:36:59.821 Recommended Arb Burst: 6 00:36:59.821 IEEE OUI Identifier: 00 00 00 00:36:59.821 Multi-path I/O 00:36:59.821 May have multiple subsystem ports: Yes 00:36:59.821 May have multiple controllers: Yes 00:36:59.821 Associated with SR-IOV VF: No 00:36:59.821 Max Data Transfer Size: Unlimited 00:36:59.821 Max Number of Namespaces: 1024 00:36:59.821 Max Number of I/O Queues: 128 00:36:59.821 NVMe Specification Version (VS): 1.3 00:36:59.821 NVMe Specification Version (Identify): 1.3 00:36:59.821 Maximum Queue Entries: 1024 00:36:59.821 Contiguous Queues Required: No 00:36:59.821 Arbitration Mechanisms Supported 00:36:59.821 Weighted Round Robin: Not Supported 00:36:59.821 Vendor Specific: Not Supported 00:36:59.821 Reset Timeout: 7500 ms 00:36:59.821 Doorbell Stride: 4 bytes 00:36:59.821 NVM Subsystem Reset: Not Supported 00:36:59.821 Command Sets Supported 00:36:59.822 NVM Command Set: Supported 00:36:59.822 Boot Partition: Not Supported 00:36:59.822 Memory Page Size Minimum: 4096 bytes 00:36:59.822 Memory Page Size Maximum: 4096 bytes 00:36:59.822 Persistent Memory Region: Not Supported 00:36:59.822 Optional Asynchronous Events Supported 00:36:59.822 Namespace Attribute Notices: Supported 00:36:59.822 Firmware Activation Notices: Not Supported 00:36:59.822 ANA Change Notices: Supported 00:36:59.822 PLE Aggregate Log Change Notices: Not Supported 00:36:59.822 LBA Status Info Alert Notices: Not Supported 00:36:59.822 EGE Aggregate Log Change Notices: Not Supported 00:36:59.822 Normal NVM Subsystem Shutdown event: Not Supported 00:36:59.822 Zone Descriptor Change Notices: Not Supported 00:36:59.822 Discovery Log Change Notices: Not Supported 00:36:59.822 Controller Attributes 00:36:59.822 128-bit Host Identifier: Supported 00:36:59.822 Non-Operational Permissive Mode: Not Supported 00:36:59.822 NVM Sets: Not Supported 00:36:59.822 Read Recovery Levels: Not Supported 00:36:59.822 Endurance Groups: Not Supported 00:36:59.822 Predictable Latency Mode: Not Supported 00:36:59.822 Traffic Based Keep ALive: Supported 00:36:59.822 Namespace Granularity: Not Supported 
00:36:59.822 SQ Associations: Not Supported 00:36:59.822 UUID List: Not Supported 00:36:59.822 Multi-Domain Subsystem: Not Supported 00:36:59.822 Fixed Capacity Management: Not Supported 00:36:59.822 Variable Capacity Management: Not Supported 00:36:59.822 Delete Endurance Group: Not Supported 00:36:59.822 Delete NVM Set: Not Supported 00:36:59.822 Extended LBA Formats Supported: Not Supported 00:36:59.822 Flexible Data Placement Supported: Not Supported 00:36:59.822 00:36:59.822 Controller Memory Buffer Support 00:36:59.822 ================================ 00:36:59.822 Supported: No 00:36:59.822 00:36:59.822 Persistent Memory Region Support 00:36:59.822 ================================ 00:36:59.822 Supported: No 00:36:59.822 00:36:59.822 Admin Command Set Attributes 00:36:59.822 ============================ 00:36:59.822 Security Send/Receive: Not Supported 00:36:59.822 Format NVM: Not Supported 00:36:59.822 Firmware Activate/Download: Not Supported 00:36:59.822 Namespace Management: Not Supported 00:36:59.822 Device Self-Test: Not Supported 00:36:59.822 Directives: Not Supported 00:36:59.822 NVMe-MI: Not Supported 00:36:59.822 Virtualization Management: Not Supported 00:36:59.822 Doorbell Buffer Config: Not Supported 00:36:59.822 Get LBA Status Capability: Not Supported 00:36:59.822 Command & Feature Lockdown Capability: Not Supported 00:36:59.822 Abort Command Limit: 4 00:36:59.822 Async Event Request Limit: 4 00:36:59.822 Number of Firmware Slots: N/A 00:36:59.822 Firmware Slot 1 Read-Only: N/A 00:36:59.822 Firmware Activation Without Reset: N/A 00:36:59.822 Multiple Update Detection Support: N/A 00:36:59.822 Firmware Update Granularity: No Information Provided 00:36:59.822 Per-Namespace SMART Log: Yes 00:36:59.822 Asymmetric Namespace Access Log Page: Supported 00:36:59.822 ANA Transition Time : 10 sec 00:36:59.822 00:36:59.822 Asymmetric Namespace Access Capabilities 00:36:59.822 ANA Optimized State : Supported 00:36:59.822 ANA Non-Optimized State : Supported 00:36:59.822 ANA Inaccessible State : Supported 00:36:59.822 ANA Persistent Loss State : Supported 00:36:59.822 ANA Change State : Supported 00:36:59.822 ANAGRPID is not changed : No 00:36:59.822 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:36:59.822 00:36:59.822 ANA Group Identifier Maximum : 128 00:36:59.822 Number of ANA Group Identifiers : 128 00:36:59.822 Max Number of Allowed Namespaces : 1024 00:36:59.822 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:36:59.822 Command Effects Log Page: Supported 00:36:59.822 Get Log Page Extended Data: Supported 00:36:59.822 Telemetry Log Pages: Not Supported 00:36:59.822 Persistent Event Log Pages: Not Supported 00:36:59.822 Supported Log Pages Log Page: May Support 00:36:59.822 Commands Supported & Effects Log Page: Not Supported 00:36:59.822 Feature Identifiers & Effects Log Page:May Support 00:36:59.822 NVMe-MI Commands & Effects Log Page: May Support 00:36:59.822 Data Area 4 for Telemetry Log: Not Supported 00:36:59.822 Error Log Page Entries Supported: 128 00:36:59.822 Keep Alive: Supported 00:36:59.822 Keep Alive Granularity: 1000 ms 00:36:59.822 00:36:59.822 NVM Command Set Attributes 00:36:59.822 ========================== 00:36:59.822 Submission Queue Entry Size 00:36:59.822 Max: 64 00:36:59.822 Min: 64 00:36:59.822 Completion Queue Entry Size 00:36:59.822 Max: 16 00:36:59.822 Min: 16 00:36:59.822 Number of Namespaces: 1024 00:36:59.822 Compare Command: Not Supported 00:36:59.822 Write Uncorrectable Command: Not Supported 00:36:59.822 Dataset Management Command: Supported 
00:36:59.822 Write Zeroes Command: Supported 00:36:59.822 Set Features Save Field: Not Supported 00:36:59.822 Reservations: Not Supported 00:36:59.822 Timestamp: Not Supported 00:36:59.822 Copy: Not Supported 00:36:59.822 Volatile Write Cache: Present 00:36:59.822 Atomic Write Unit (Normal): 1 00:36:59.822 Atomic Write Unit (PFail): 1 00:36:59.822 Atomic Compare & Write Unit: 1 00:36:59.822 Fused Compare & Write: Not Supported 00:36:59.822 Scatter-Gather List 00:36:59.822 SGL Command Set: Supported 00:36:59.822 SGL Keyed: Not Supported 00:36:59.822 SGL Bit Bucket Descriptor: Not Supported 00:36:59.822 SGL Metadata Pointer: Not Supported 00:36:59.822 Oversized SGL: Not Supported 00:36:59.822 SGL Metadata Address: Not Supported 00:36:59.822 SGL Offset: Supported 00:36:59.822 Transport SGL Data Block: Not Supported 00:36:59.822 Replay Protected Memory Block: Not Supported 00:36:59.822 00:36:59.822 Firmware Slot Information 00:36:59.822 ========================= 00:36:59.822 Active slot: 0 00:36:59.822 00:36:59.822 Asymmetric Namespace Access 00:36:59.822 =========================== 00:36:59.822 Change Count : 0 00:36:59.822 Number of ANA Group Descriptors : 1 00:36:59.822 ANA Group Descriptor : 0 00:36:59.822 ANA Group ID : 1 00:36:59.822 Number of NSID Values : 1 00:36:59.822 Change Count : 0 00:36:59.822 ANA State : 1 00:36:59.822 Namespace Identifier : 1 00:36:59.822 00:36:59.822 Commands Supported and Effects 00:36:59.822 ============================== 00:36:59.822 Admin Commands 00:36:59.822 -------------- 00:36:59.822 Get Log Page (02h): Supported 00:36:59.822 Identify (06h): Supported 00:36:59.822 Abort (08h): Supported 00:36:59.822 Set Features (09h): Supported 00:36:59.822 Get Features (0Ah): Supported 00:36:59.822 Asynchronous Event Request (0Ch): Supported 00:36:59.822 Keep Alive (18h): Supported 00:36:59.822 I/O Commands 00:36:59.822 ------------ 00:36:59.822 Flush (00h): Supported 00:36:59.822 Write (01h): Supported LBA-Change 00:36:59.822 Read (02h): Supported 00:36:59.822 Write Zeroes (08h): Supported LBA-Change 00:36:59.822 Dataset Management (09h): Supported 00:36:59.822 00:36:59.822 Error Log 00:36:59.822 ========= 00:36:59.822 Entry: 0 00:36:59.822 Error Count: 0x3 00:36:59.822 Submission Queue Id: 0x0 00:36:59.822 Command Id: 0x5 00:36:59.822 Phase Bit: 0 00:36:59.822 Status Code: 0x2 00:36:59.822 Status Code Type: 0x0 00:36:59.822 Do Not Retry: 1 00:36:59.822 Error Location: 0x28 00:36:59.822 LBA: 0x0 00:36:59.822 Namespace: 0x0 00:36:59.822 Vendor Log Page: 0x0 00:36:59.822 ----------- 00:36:59.822 Entry: 1 00:36:59.822 Error Count: 0x2 00:36:59.822 Submission Queue Id: 0x0 00:36:59.822 Command Id: 0x5 00:36:59.822 Phase Bit: 0 00:36:59.822 Status Code: 0x2 00:36:59.822 Status Code Type: 0x0 00:36:59.822 Do Not Retry: 1 00:36:59.822 Error Location: 0x28 00:36:59.822 LBA: 0x0 00:36:59.822 Namespace: 0x0 00:36:59.822 Vendor Log Page: 0x0 00:36:59.822 ----------- 00:36:59.822 Entry: 2 00:36:59.822 Error Count: 0x1 00:36:59.822 Submission Queue Id: 0x0 00:36:59.822 Command Id: 0x4 00:36:59.822 Phase Bit: 0 00:36:59.822 Status Code: 0x2 00:36:59.822 Status Code Type: 0x0 00:36:59.822 Do Not Retry: 1 00:36:59.822 Error Location: 0x28 00:36:59.822 LBA: 0x0 00:36:59.822 Namespace: 0x0 00:36:59.822 Vendor Log Page: 0x0 00:36:59.822 00:36:59.822 Number of Queues 00:36:59.822 ================ 00:36:59.822 Number of I/O Submission Queues: 128 00:36:59.822 Number of I/O Completion Queues: 128 00:36:59.822 00:36:59.822 ZNS Specific Controller Data 00:36:59.822 
============================ 00:36:59.822 Zone Append Size Limit: 0 00:36:59.822 00:36:59.822 00:36:59.822 Active Namespaces 00:36:59.822 ================= 00:36:59.822 get_feature(0x05) failed 00:36:59.822 Namespace ID:1 00:36:59.822 Command Set Identifier: NVM (00h) 00:36:59.822 Deallocate: Supported 00:36:59.822 Deallocated/Unwritten Error: Not Supported 00:36:59.822 Deallocated Read Value: Unknown 00:36:59.822 Deallocate in Write Zeroes: Not Supported 00:36:59.823 Deallocated Guard Field: 0xFFFF 00:36:59.823 Flush: Supported 00:36:59.823 Reservation: Not Supported 00:36:59.823 Namespace Sharing Capabilities: Multiple Controllers 00:36:59.823 Size (in LBAs): 3750748848 (1788GiB) 00:36:59.823 Capacity (in LBAs): 3750748848 (1788GiB) 00:36:59.823 Utilization (in LBAs): 3750748848 (1788GiB) 00:36:59.823 UUID: 2ce8d23c-d14b-4aaf-814b-611933c12649 00:36:59.823 Thin Provisioning: Not Supported 00:36:59.823 Per-NS Atomic Units: Yes 00:36:59.823 Atomic Write Unit (Normal): 8 00:36:59.823 Atomic Write Unit (PFail): 8 00:36:59.823 Preferred Write Granularity: 8 00:36:59.823 Atomic Compare & Write Unit: 8 00:36:59.823 Atomic Boundary Size (Normal): 0 00:36:59.823 Atomic Boundary Size (PFail): 0 00:36:59.823 Atomic Boundary Offset: 0 00:36:59.823 NGUID/EUI64 Never Reused: No 00:36:59.823 ANA group ID: 1 00:36:59.823 Namespace Write Protected: No 00:36:59.823 Number of LBA Formats: 1 00:36:59.823 Current LBA Format: LBA Format #00 00:36:59.823 LBA Format #00: Data Size: 512 Metadata Size: 0 00:36:59.823 00:36:59.823 10:53:05 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:36:59.823 10:53:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:59.823 10:53:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:36:59.823 10:53:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:59.823 10:53:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:36:59.823 10:53:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:59.823 10:53:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:59.823 rmmod nvme_tcp 00:36:59.823 rmmod nvme_fabrics 00:36:59.823 10:53:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:59.823 10:53:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:36:59.823 10:53:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:36:59.823 10:53:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:36:59.823 10:53:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:59.823 10:53:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:59.823 10:53:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:59.823 10:53:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:59.823 10:53:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:59.823 10:53:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:59.823 10:53:05 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:59.823 10:53:05 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:01.739 10:53:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:01.739 10:53:07 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:37:01.739 10:53:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:37:01.739 10:53:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:37:01.739 10:53:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:01.739 10:53:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:01.999 10:53:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:01.999 10:53:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:01.999 10:53:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:37:01.999 10:53:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:37:01.999 10:53:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:06.203 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:06.203 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:06.203 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:06.203 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:06.203 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:06.203 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:06.203 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:06.203 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:06.203 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:06.203 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:06.203 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:06.203 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:06.203 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:06.203 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:06.203 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:06.203 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:06.203 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:37:06.203 00:37:06.203 real 0m19.617s 00:37:06.203 user 0m5.377s 00:37:06.203 sys 0m11.326s 00:37:06.203 10:53:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:06.203 10:53:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:37:06.203 ************************************ 00:37:06.203 END TEST nvmf_identify_kernel_target 00:37:06.203 ************************************ 00:37:06.203 10:53:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:37:06.203 10:53:11 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:37:06.203 10:53:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:37:06.203 10:53:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:06.203 10:53:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:06.203 ************************************ 
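Condensed, the identify_kernel_nvmf run that ends above does three things: it exports the local /dev/nvme0n1 through the kernel nvmet target purely via configfs, points nvme discover and spdk_nvme_identify at 10.0.0.1:4420 (which produces the two discovery entries and the two identify dumps above), and then tears the configfs hierarchy down in reverse order before unloading nvmet_tcp/nvmet. A sketch of that sequence follows, with $subsys/$port as shorthand for the paths in the trace; note that the configfs attribute file names are the standard kernel nvmet layout and are an inference here, since bash xtrace shows the echo values but not their redirection targets.

  # Export /dev/nvme0n1 through the kernel NVMe/TCP target, then clean up
  # (sketch of configure_kernel_target / clean_kernel_target as traced above).
  modprobe nvmet                                   # nvmet_tcp is unloaded together with it at teardown
  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  mkdir "$subsys" "$subsys/namespaces/1" "$port"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_serial"
  echo 1            > "$subsys/attr_allow_any_host"
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"              # start listening on 10.0.0.1:4420
  nvme discover -a 10.0.0.1 -t tcp -s 4420         # the run also passes --hostnqn/--hostid
  # Teardown, in reverse order:
  echo 0 > "$subsys/namespaces/1/enable"
  rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
  rmdir "$subsys/namespaces/1" "$port" "$subsys"
  modprobe -r nvmet_tcp nvmet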
00:37:06.203 START TEST nvmf_auth_host 00:37:06.203 ************************************ 00:37:06.203 10:53:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:37:06.203 * Looking for test storage... 00:37:06.203 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:06.203 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:06.203 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:37:06.203 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:06.203 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:06.203 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:06.203 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:06.203 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:06.203 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:06.203 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:06.203 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:06.203 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:06.203 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:06.203 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:06.203 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:06.203 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:06.203 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:06.203 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:06.203 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:06.203 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:06.203 10:53:11 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:06.203 10:53:11 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:06.203 10:53:11 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:06.203 10:53:11 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:06.203 10:53:11 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:06.203 10:53:11 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:06.203 10:53:11 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:37:06.203 10:53:11 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:06.204 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:37:06.204 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:06.204 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:06.204 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:06.204 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:06.204 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:06.204 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:06.204 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:06.204 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:06.204 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:37:06.204 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:37:06.204 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:37:06.204 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:37:06.204 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:37:06.204 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:37:06.204 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:37:06.204 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:37:06.204 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:37:06.204 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:06.204 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:06.204 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:06.204 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:06.204 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:06.204 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:06.204 10:53:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:06.204 10:53:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:06.204 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:37:06.204 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:37:06.204 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:37:06.204 10:53:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:14.344 
10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:14.344 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:14.344 Found 0000:31:00.1 (0x8086 - 0x159b) 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:37:14.344 Found net devices under 0000:31:00.0: 
cvl_0_0 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:14.344 Found net devices under 0000:31:00.1: cvl_0_1 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:37:14.344 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:14.344 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.403 ms 00:37:14.344 00:37:14.344 --- 10.0.0.2 ping statistics --- 00:37:14.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:14.344 rtt min/avg/max/mdev = 0.403/0.403/0.403/0.000 ms 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:14.344 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:14.344 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:37:14.344 00:37:14.344 --- 10.0.0.1 ping statistics --- 00:37:14.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:14.344 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=2228533 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 2228533 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2228533 ']' 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
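
Everything nvmftestinit did above boils down to a small two-endpoint NVMe/TCP topology built from the two detected ports: cvl_0_0 is moved into a private network namespace and becomes the target side at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened, and reachability is checked in both directions before the SPDK target is started inside that namespace. Reduced to its essentials (interface names, addresses and flags taken from the trace):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target NIC into the netns

ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# The SPDK target then runs inside the target namespace with nvme_auth tracing enabled
# (binary path shown relative to the SPDK checkout).
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
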
00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:14.344 10:53:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:15.282 10:53:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:15.282 10:53:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:37:15.282 10:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:15.282 10:53:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:15.282 10:53:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:15.282 10:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:15.282 10:53:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:37:15.282 10:53:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:37:15.282 10:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:37:15.282 10:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:15.282 10:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:37:15.282 10:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:37:15.282 10:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:37:15.282 10:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:37:15.282 10:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f2f4f8f5a663f3f2a93c7c061bbd23b3 00:37:15.282 10:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:37:15.282 10:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.ubD 00:37:15.282 10:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f2f4f8f5a663f3f2a93c7c061bbd23b3 0 00:37:15.282 10:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f2f4f8f5a663f3f2a93c7c061bbd23b3 0 00:37:15.282 10:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:37:15.282 10:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:37:15.282 10:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f2f4f8f5a663f3f2a93c7c061bbd23b3 00:37:15.282 10:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:37:15.282 10:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:37:15.282 10:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.ubD 00:37:15.282 10:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.ubD 00:37:15.282 10:53:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.ubD 00:37:15.282 10:53:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:37:15.282 10:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:37:15.282 10:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:15.282 10:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:37:15.282 10:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:37:15.282 
10:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:37:15.282 10:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:37:15.282 10:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=427a0f2d15668e95b9cd1edae49b36db89e2ed5e4ca9a5c2e68f8f8765d0a465 00:37:15.282 10:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:37:15.282 10:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Ld9 00:37:15.282 10:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 427a0f2d15668e95b9cd1edae49b36db89e2ed5e4ca9a5c2e68f8f8765d0a465 3 00:37:15.282 10:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 427a0f2d15668e95b9cd1edae49b36db89e2ed5e4ca9a5c2e68f8f8765d0a465 3 00:37:15.282 10:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:37:15.282 10:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:37:15.282 10:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=427a0f2d15668e95b9cd1edae49b36db89e2ed5e4ca9a5c2e68f8f8765d0a465 00:37:15.282 10:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:37:15.282 10:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:37:15.282 10:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Ld9 00:37:15.282 10:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Ld9 00:37:15.282 10:53:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Ld9 00:37:15.282 10:53:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:37:15.282 10:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:37:15.282 10:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:15.282 10:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:37:15.282 10:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:37:15.282 10:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:37:15.282 10:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:37:15.282 10:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=770a70bd8692aa1fca8b5b09d665130fafeab56807ed5ff5 00:37:15.542 10:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:37:15.542 10:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.9Jl 00:37:15.542 10:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 770a70bd8692aa1fca8b5b09d665130fafeab56807ed5ff5 0 00:37:15.542 10:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 770a70bd8692aa1fca8b5b09d665130fafeab56807ed5ff5 0 00:37:15.542 10:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:37:15.542 10:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:37:15.542 10:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=770a70bd8692aa1fca8b5b09d665130fafeab56807ed5ff5 00:37:15.542 10:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:37:15.542 10:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:37:15.542 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.9Jl 00:37:15.542 10:53:21 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.9Jl 00:37:15.542 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.9Jl 00:37:15.542 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:37:15.542 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:37:15.542 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:15.542 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:37:15.542 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:37:15.542 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:37:15.542 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:37:15.542 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=aeda17ea60d0e14d7945f2e17cf12f57c7bb5f9d616e9f9b 00:37:15.542 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:37:15.542 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.7Gk 00:37:15.542 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key aeda17ea60d0e14d7945f2e17cf12f57c7bb5f9d616e9f9b 2 00:37:15.542 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 aeda17ea60d0e14d7945f2e17cf12f57c7bb5f9d616e9f9b 2 00:37:15.542 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:37:15.542 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:37:15.542 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=aeda17ea60d0e14d7945f2e17cf12f57c7bb5f9d616e9f9b 00:37:15.542 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:37:15.542 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:37:15.542 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.7Gk 00:37:15.542 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.7Gk 00:37:15.542 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.7Gk 00:37:15.542 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:37:15.542 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:37:15.542 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:15.543 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:37:15.543 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:37:15.543 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:37:15.543 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:37:15.543 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6b1b8d59bc150d686fd8379f5cbce039 00:37:15.543 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:37:15.543 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.SDr 00:37:15.543 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6b1b8d59bc150d686fd8379f5cbce039 1 00:37:15.543 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6b1b8d59bc150d686fd8379f5cbce039 1 
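
Each gen_dhchap_key call above pairs a digest name with a length in hex characters. The digest only selects the numeric hash identifier that becomes the second field of the resulting DHHC-1 string (null=0, sha256=1, sha384=2, sha512=3), and the length is halved to get the number of random bytes pulled from /dev/urandom. In shorthand (variable names here are illustrative):

declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)

digest=sha384 len=48                                # as in "gen_dhchap_key sha384 48"
hash_id=${digests[$digest]}                         # -> 2, the "xx" in DHHC-1:xx:...
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)      # 24 random bytes -> 48 hex characters
echo "hash id $hash_id, hex secret $key"
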
00:37:15.543 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:37:15.543 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:37:15.543 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6b1b8d59bc150d686fd8379f5cbce039 00:37:15.543 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:37:15.543 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:37:15.543 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.SDr 00:37:15.543 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.SDr 00:37:15.543 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.SDr 00:37:15.543 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:37:15.543 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:37:15.543 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:15.543 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:37:15.543 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:37:15.543 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:37:15.543 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:37:15.543 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1299df86a8826181cd58013430dc9c54 00:37:15.543 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:37:15.543 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.BFu 00:37:15.543 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1299df86a8826181cd58013430dc9c54 1 00:37:15.543 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1299df86a8826181cd58013430dc9c54 1 00:37:15.543 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:37:15.543 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:37:15.543 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1299df86a8826181cd58013430dc9c54 00:37:15.543 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:37:15.543 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:37:15.543 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.BFu 00:37:15.543 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.BFu 00:37:15.543 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.BFu 00:37:15.543 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:37:15.543 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:37:15.543 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:15.543 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:37:15.543 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:37:15.543 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:37:15.543 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:37:15.543 10:53:21 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=f07823f95ff815e00bc6b195096728da713f887820b9ed57 00:37:15.543 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:37:15.543 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.yqk 00:37:15.543 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f07823f95ff815e00bc6b195096728da713f887820b9ed57 2 00:37:15.543 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f07823f95ff815e00bc6b195096728da713f887820b9ed57 2 00:37:15.543 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:37:15.543 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:37:15.543 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f07823f95ff815e00bc6b195096728da713f887820b9ed57 00:37:15.543 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:37:15.543 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:37:15.803 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.yqk 00:37:15.803 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.yqk 00:37:15.803 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.yqk 00:37:15.803 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:37:15.803 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:37:15.803 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:15.803 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:37:15.803 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:37:15.803 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:37:15.803 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:37:15.803 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1f8d47d7877edd3135187a1395cea44d 00:37:15.803 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:37:15.803 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.yA8 00:37:15.803 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1f8d47d7877edd3135187a1395cea44d 0 00:37:15.803 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1f8d47d7877edd3135187a1395cea44d 0 00:37:15.803 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:37:15.803 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:37:15.803 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1f8d47d7877edd3135187a1395cea44d 00:37:15.803 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:37:15.803 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:37:15.803 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.yA8 00:37:15.803 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.yA8 00:37:15.803 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.yA8 00:37:15.803 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:37:15.803 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:37:15.803 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:15.803 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:37:15.803 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:37:15.803 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:37:15.803 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:37:15.803 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=aefed0d736ca963f2990041ff73444189b67b60246784ac5704f6e985e59a2cc 00:37:15.803 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:37:15.803 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.bhW 00:37:15.803 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key aefed0d736ca963f2990041ff73444189b67b60246784ac5704f6e985e59a2cc 3 00:37:15.803 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 aefed0d736ca963f2990041ff73444189b67b60246784ac5704f6e985e59a2cc 3 00:37:15.803 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:37:15.803 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:37:15.803 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=aefed0d736ca963f2990041ff73444189b67b60246784ac5704f6e985e59a2cc 00:37:15.803 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:37:15.803 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:37:15.803 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.bhW 00:37:15.803 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.bhW 00:37:15.803 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.bhW 00:37:15.803 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:37:15.803 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2228533 00:37:15.803 10:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2228533 ']' 00:37:15.803 10:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:15.803 10:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:15.803 10:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:15.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
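
At this point five host secrets (keys[0]..keys[4]) and four controller secrets (ckeys[0]..ckeys[3]) exist as 0600 files under /tmp; ckeys[4] is left empty, so that slot is later used without a controller secret. The python one-liner the trace elides implements the DH-HMAC-CHAP secret representation: the ASCII hex string is suffixed with its CRC-32 and base64-encoded between "DHHC-1:<hash-id>:" and a trailing colon. A minimal stand-in (encode_dhchap is my name for it, not the script's; the little-endian CRC matches the DHHC-1 strings visible later in the trace):

# Turn a hex secret plus a hash id into the "DHHC-1:<id>:<base64>:" form used above.
encode_dhchap() {
    local key=$1 hash_id=$2
    python3 -c '
import base64, struct, sys, zlib
secret = sys.argv[1].encode()                           # the ASCII hex string is the secret itself
blob = secret + struct.pack("<I", zlib.crc32(secret))   # append CRC-32, little-endian
print(f"DHHC-1:{int(sys.argv[2]):02d}:{base64.b64encode(blob).decode()}:")
' "$key" "$hash_id"
}

file=$(mktemp -t spdk.key-null.XXX)
encode_dhchap "$(xxd -p -c0 -l 16 /dev/urandom)" 0 > "$file"   # e.g. a 32-hex-char null-digest key
chmod 0600 "$file"
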
00:37:15.803 10:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:15.803 10:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.ubD 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Ld9 ]] 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Ld9 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.9Jl 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.7Gk ]] 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.7Gk 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.SDr 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.BFu ]] 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.BFu 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
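
rpc_cmd in the trace is a thin wrapper around SPDK's scripts/rpc.py, pointed at the target's /var/tmp/spdk.sock. Each slot's host secret is registered under keyN and, when present, its controller secret under ckeyN, so later attach calls can refer to the secrets by name; the loop continues below for key3/ckey3 and key4 (ckeys[4] is empty and skipped). Outside the harness, one slot looks roughly like this (key file names taken from the trace):

rpc=./scripts/rpc.py                                        # SPDK JSON-RPC client

"$rpc" keyring_file_add_key key0  /tmp/spdk.key-null.ubD    # host secret for slot 0
"$rpc" keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Ld9  # controller secret for slot 0
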
00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.yqk 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.yA8 ]] 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.yA8 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.bhW 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
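
configure_kernel_target, which the trace is entering here, builds the Linux-kernel NVMe-oF target that will act as the authenticating controller: a configfs subsystem named nqn.2024-02.io.spdk:cnode0 backed by the first usable local NVMe block device, exported over TCP on 10.0.0.1:4420, and restricted to the one allowed host NQN. Leaving out the block-device scan, the configfs recipe is roughly the following (attribute names are the kernel nvmet ones; the trace itself hides the redirect targets of its echo calls):

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=$nvmet/ports/1

modprobe nvmet
modprobe nvmet-tcp                                   # TCP transport for the kernel target

mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$port"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"

echo 10.0.0.1 > "$port/addr_traddr"                  # the namespaced target address
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"

# Only the test host may connect, which is what makes per-host DH-HMAC-CHAP settings apply.
mkdir "$nvmet/hosts/nqn.2024-02.io.spdk:host0"
echo 0 > "$subsys/attr_allow_any_host"
ln -s "$nvmet/hosts/nqn.2024-02.io.spdk:host0" "$subsys/allowed_hosts/"
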
00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:37:16.064 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:19.357 Waiting for block devices as requested 00:37:19.616 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:19.616 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:19.616 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:19.876 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:19.876 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:19.876 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:20.137 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:20.137 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:20.137 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:20.395 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:20.395 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:20.395 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:20.655 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:20.655 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:20.655 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:20.655 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:20.915 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:21.486 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:37:21.486 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:37:21.486 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:37:21.486 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:37:21.486 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:37:21.486 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:37:21.486 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:37:21.486 10:53:27 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:37:21.486 10:53:27 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:37:21.486 No valid GPT data, bailing 00:37:21.486 10:53:27 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:37:21.486 10:53:27 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:37:21.486 10:53:27 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:37:21.486 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:37:21.486 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:37:21.486 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:37:21.486 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:37:21.486 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:37:21.486 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:37:21.486 10:53:27 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:37:21.486 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:37:21.486 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:37:21.486 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:37:21.486 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:37:21.486 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:37:21.486 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:37:21.486 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:37:21.486 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:37:21.486 00:37:21.486 Discovery Log Number of Records 2, Generation counter 2 00:37:21.486 =====Discovery Log Entry 0====== 00:37:21.486 trtype: tcp 00:37:21.486 adrfam: ipv4 00:37:21.486 subtype: current discovery subsystem 00:37:21.486 treq: not specified, sq flow control disable supported 00:37:21.486 portid: 1 00:37:21.486 trsvcid: 4420 00:37:21.486 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:37:21.486 traddr: 10.0.0.1 00:37:21.486 eflags: none 00:37:21.486 sectype: none 00:37:21.486 =====Discovery Log Entry 1====== 00:37:21.486 trtype: tcp 00:37:21.486 adrfam: ipv4 00:37:21.486 subtype: nvme subsystem 00:37:21.486 treq: not specified, sq flow control disable supported 00:37:21.486 portid: 1 00:37:21.486 trsvcid: 4420 00:37:21.486 subnqn: nqn.2024-02.io.spdk:cnode0 00:37:21.486 traddr: 10.0.0.1 00:37:21.486 eflags: none 00:37:21.486 sectype: none 00:37:21.486 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:37:21.486 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:37:21.486 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:37:21.486 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:37:21.486 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:21.486 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:21.486 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:21.486 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:21.486 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzcwYTcwYmQ4NjkyYWExZmNhOGI1YjA5ZDY2NTEzMGZhZmVhYjU2ODA3ZWQ1ZmY1COXRbw==: 00:37:21.487 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWVkYTE3ZWE2MGQwZTE0ZDc5NDVmMmUxN2NmMTJmNTdjN2JiNWY5ZDYxNmU5ZjlitK/bBA==: 00:37:21.487 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:21.487 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:21.487 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzcwYTcwYmQ4NjkyYWExZmNhOGI1YjA5ZDY2NTEzMGZhZmVhYjU2ODA3ZWQ1ZmY1COXRbw==: 00:37:21.487 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWVkYTE3ZWE2MGQwZTE0ZDc5NDVmMmUxN2NmMTJmNTdjN2JiNWY5ZDYxNmU5ZjlitK/bBA==: 
]] 00:37:21.487 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWVkYTE3ZWE2MGQwZTE0ZDc5NDVmMmUxN2NmMTJmNTdjN2JiNWY5ZDYxNmU5ZjlitK/bBA==: 00:37:21.487 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:37:21.487 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:37:21.487 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:37:21.487 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:37:21.487 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:37:21.487 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:21.487 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:37:21.487 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:37:21.487 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:21.487 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:21.487 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:37:21.487 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:21.487 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:21.487 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:21.487 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:21.487 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:21.487 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:21.487 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:21.487 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:21.487 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:21.487 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:21.487 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:21.487 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:21.487 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:21.487 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:21.487 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:21.487 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:21.487 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:21.748 nvme0n1 00:37:21.748 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:21.748 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:21.748 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:21.748 10:53:27 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:21.748 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:21.748 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:21.748 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:21.748 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:21.748 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:21.748 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:21.748 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:21.748 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:37:21.748 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:21.748 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:21.748 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:37:21.748 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:21.748 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:21.748 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:21.748 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:21.748 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJmNGY4ZjVhNjYzZjNmMmE5M2M3YzA2MWJiZDIzYjO1d+Fn: 00:37:21.748 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDI3YTBmMmQxNTY2OGU5NWI5Y2QxZWRhZTQ5YjM2ZGI4OWUyZWQ1ZTRjYTlhNWMyZTY4ZjhmODc2NWQwYTQ2Narqa48=: 00:37:21.748 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:21.748 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:21.748 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJmNGY4ZjVhNjYzZjNmMmE5M2M3YzA2MWJiZDIzYjO1d+Fn: 00:37:21.748 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDI3YTBmMmQxNTY2OGU5NWI5Y2QxZWRhZTQ5YjM2ZGI4OWUyZWQ1ZTRjYTlhNWMyZTY4ZjhmODc2NWQwYTQ2Narqa48=: ]] 00:37:21.748 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDI3YTBmMmQxNTY2OGU5NWI5Y2QxZWRhZTQ5YjM2ZGI4OWUyZWQ1ZTRjYTlhNWMyZTY4ZjhmODc2NWQwYTQ2Narqa48=: 00:37:21.748 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:37:21.748 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:21.748 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:21.748 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:21.748 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:21.748 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:21.748 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:37:21.748 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:21.748 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:21.748 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:21.748 
10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:21.748 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:21.748 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:21.748 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:21.748 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:21.748 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:21.748 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:21.748 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:21.748 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:21.748 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:21.748 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:21.749 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:21.749 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:21.749 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.009 nvme0n1 00:37:22.009 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:22.009 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:22.009 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:22.009 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:22.009 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.009 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:22.009 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:22.009 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:22.009 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:22.009 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.009 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:22.009 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:22.009 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:37:22.009 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:22.009 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:22.009 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:22.009 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:22.009 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzcwYTcwYmQ4NjkyYWExZmNhOGI1YjA5ZDY2NTEzMGZhZmVhYjU2ODA3ZWQ1ZmY1COXRbw==: 00:37:22.009 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWVkYTE3ZWE2MGQwZTE0ZDc5NDVmMmUxN2NmMTJmNTdjN2JiNWY5ZDYxNmU5ZjlitK/bBA==: 00:37:22.009 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:22.009 10:53:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:22.009 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzcwYTcwYmQ4NjkyYWExZmNhOGI1YjA5ZDY2NTEzMGZhZmVhYjU2ODA3ZWQ1ZmY1COXRbw==: 00:37:22.009 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWVkYTE3ZWE2MGQwZTE0ZDc5NDVmMmUxN2NmMTJmNTdjN2JiNWY5ZDYxNmU5ZjlitK/bBA==: ]] 00:37:22.009 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWVkYTE3ZWE2MGQwZTE0ZDc5NDVmMmUxN2NmMTJmNTdjN2JiNWY5ZDYxNmU5ZjlitK/bBA==: 00:37:22.009 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:37:22.009 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:22.009 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:22.009 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:22.009 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:22.009 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:22.009 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:37:22.009 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:22.009 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.009 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:22.009 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:22.009 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:22.009 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:22.009 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:22.009 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:22.009 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:22.009 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:22.009 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:22.009 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:22.009 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:22.009 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:22.009 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:22.009 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:22.010 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.010 nvme0n1 00:37:22.010 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:22.010 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:22.010 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:22.010 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:22.010 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
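
From here the test walks every digest in {sha256, sha384, sha512} against every FFDHE group and every key slot. One iteration, as just traced for sha256/ffdhe2048/slot 1, has two halves: the kernel target is told which algorithms and secrets to expect from the host NQN, and the SPDK initiator is then attached with the matching key names and must pass the DH-HMAC-CHAP handshake before a controller named nvme0 appears. Spelled out (the dhchap_* attribute names are the kernel nvmet ones; the trace hides its redirect targets):

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
rpc=./scripts/rpc.py

# Target side: hash, DH group, and the two DHHC-1 secrets for this host.
echo 'hmac(sha256)'          > "$host/dhchap_hash"
echo ffdhe2048               > "$host/dhchap_dhgroup"
cat /tmp/spdk.key-null.9Jl   > "$host/dhchap_key"        # keys[1], host secret
cat /tmp/spdk.key-sha384.7Gk > "$host/dhchap_ctrl_key"   # ckeys[1], enables bidirectional auth

# Initiator side: restrict SPDK to the same algorithms, then connect with key1/ckey1.
"$rpc" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
"$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
       -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
       --dhchap-key key1 --dhchap-ctrlr-key ckey1
"$rpc" bdev_nvme_get_controllers | jq -r '.[].name'      # expect "nvme0" once auth succeeds
"$rpc" bdev_nvme_detach_controller nvme0                 # tear down before the next combination
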
00:37:22.270 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:22.270 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:22.270 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:22.270 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:22.270 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.270 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:22.270 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:22.270 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:37:22.270 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:22.270 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:22.270 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:22.270 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:22.270 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmIxYjhkNTliYzE1MGQ2ODZmZDgzNzlmNWNiY2UwMzk8eQIO: 00:37:22.270 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTI5OWRmODZhODgyNjE4MWNkNTgwMTM0MzBkYzljNTQ2R3GO: 00:37:22.270 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:22.270 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:22.270 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmIxYjhkNTliYzE1MGQ2ODZmZDgzNzlmNWNiY2UwMzk8eQIO: 00:37:22.270 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTI5OWRmODZhODgyNjE4MWNkNTgwMTM0MzBkYzljNTQ2R3GO: ]] 00:37:22.270 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTI5OWRmODZhODgyNjE4MWNkNTgwMTM0MzBkYzljNTQ2R3GO: 00:37:22.270 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:37:22.270 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:22.270 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:22.270 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:22.270 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:22.270 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:22.270 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:37:22.270 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:22.270 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.270 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:22.270 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:22.270 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:22.270 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:22.270 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:22.270 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:22.270 10:53:27 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:22.270 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:22.270 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:22.270 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:22.270 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:22.270 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:22.270 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:22.270 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:22.270 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.270 nvme0n1 00:37:22.270 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:22.270 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:22.270 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:22.270 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:22.270 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.270 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:22.270 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:22.270 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:22.270 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:22.270 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.530 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:22.530 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:22.530 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:37:22.530 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:22.530 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:22.530 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:22.530 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:22.530 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjA3ODIzZjk1ZmY4MTVlMDBiYzZiMTk1MDk2NzI4ZGE3MTNmODg3ODIwYjllZDU3xsMkhQ==: 00:37:22.530 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWY4ZDQ3ZDc4NzdlZGQzMTM1MTg3YTEzOTVjZWE0NGQeD0No: 00:37:22.530 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:22.530 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:22.530 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjA3ODIzZjk1ZmY4MTVlMDBiYzZiMTk1MDk2NzI4ZGE3MTNmODg3ODIwYjllZDU3xsMkhQ==: 00:37:22.530 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWY4ZDQ3ZDc4NzdlZGQzMTM1MTg3YTEzOTVjZWE0NGQeD0No: ]] 00:37:22.530 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWY4ZDQ3ZDc4NzdlZGQzMTM1MTg3YTEzOTVjZWE0NGQeD0No: 00:37:22.530 10:53:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:37:22.530 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:22.530 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:22.530 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:22.530 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:22.530 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:22.530 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:37:22.530 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:22.530 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.530 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:22.530 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:22.530 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:22.530 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:22.530 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:22.530 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:22.530 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:22.530 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:22.530 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:22.530 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:22.530 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:22.530 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:22.530 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:22.530 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:22.530 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.530 nvme0n1 00:37:22.531 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:22.531 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:22.531 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:22.531 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:22.531 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.531 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:22.531 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:22.531 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:22.531 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:22.531 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.531 10:53:28 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:22.531 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:22.531 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:37:22.531 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:22.531 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:22.531 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:22.531 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:22.531 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWVmZWQwZDczNmNhOTYzZjI5OTAwNDFmZjczNDQ0MTg5YjY3YjYwMjQ2Nzg0YWM1NzA0ZjZlOTg1ZTU5YTJjYxUuTy0=: 00:37:22.531 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:22.531 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:22.531 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:22.531 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWVmZWQwZDczNmNhOTYzZjI5OTAwNDFmZjczNDQ0MTg5YjY3YjYwMjQ2Nzg0YWM1NzA0ZjZlOTg1ZTU5YTJjYxUuTy0=: 00:37:22.531 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:22.531 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:37:22.531 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:22.531 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:22.531 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:22.531 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:22.531 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:22.531 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:37:22.531 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:22.531 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.531 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:22.531 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:22.531 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:22.531 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:22.531 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:22.531 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:22.531 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:22.531 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:22.531 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:22.531 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:22.531 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:22.531 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:22.531 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:22.531 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:22.531 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.790 nvme0n1 00:37:22.790 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:22.790 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:22.790 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:22.790 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:22.790 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.790 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:22.790 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:22.790 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:22.790 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:22.790 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.790 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:22.790 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:22.790 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:22.790 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:37:22.790 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:22.790 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:22.790 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:22.790 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:22.790 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJmNGY4ZjVhNjYzZjNmMmE5M2M3YzA2MWJiZDIzYjO1d+Fn: 00:37:22.790 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDI3YTBmMmQxNTY2OGU5NWI5Y2QxZWRhZTQ5YjM2ZGI4OWUyZWQ1ZTRjYTlhNWMyZTY4ZjhmODc2NWQwYTQ2Narqa48=: 00:37:22.790 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:22.790 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:22.790 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJmNGY4ZjVhNjYzZjNmMmE5M2M3YzA2MWJiZDIzYjO1d+Fn: 00:37:22.790 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDI3YTBmMmQxNTY2OGU5NWI5Y2QxZWRhZTQ5YjM2ZGI4OWUyZWQ1ZTRjYTlhNWMyZTY4ZjhmODc2NWQwYTQ2Narqa48=: ]] 00:37:22.790 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDI3YTBmMmQxNTY2OGU5NWI5Y2QxZWRhZTQ5YjM2ZGI4OWUyZWQ1ZTRjYTlhNWMyZTY4ZjhmODc2NWQwYTQ2Narqa48=: 00:37:22.790 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:37:22.790 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:22.790 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:22.790 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:22.790 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:22.790 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:37:22.790 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:37:22.790 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:22.791 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.791 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:22.791 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:22.791 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:22.791 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:22.791 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:22.791 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:22.791 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:22.791 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:22.791 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:22.791 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:22.791 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:22.791 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:22.791 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:22.791 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:22.791 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.050 nvme0n1 00:37:23.050 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:23.050 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:23.050 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:23.050 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:23.050 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.050 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:23.050 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:23.050 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:23.050 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:23.050 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.050 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:23.050 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:23.050 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:37:23.050 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:23.050 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:23.050 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:23.050 10:53:28 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:37:23.050 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzcwYTcwYmQ4NjkyYWExZmNhOGI1YjA5ZDY2NTEzMGZhZmVhYjU2ODA3ZWQ1ZmY1COXRbw==: 00:37:23.050 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWVkYTE3ZWE2MGQwZTE0ZDc5NDVmMmUxN2NmMTJmNTdjN2JiNWY5ZDYxNmU5ZjlitK/bBA==: 00:37:23.050 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:23.050 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:23.050 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzcwYTcwYmQ4NjkyYWExZmNhOGI1YjA5ZDY2NTEzMGZhZmVhYjU2ODA3ZWQ1ZmY1COXRbw==: 00:37:23.050 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWVkYTE3ZWE2MGQwZTE0ZDc5NDVmMmUxN2NmMTJmNTdjN2JiNWY5ZDYxNmU5ZjlitK/bBA==: ]] 00:37:23.050 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWVkYTE3ZWE2MGQwZTE0ZDc5NDVmMmUxN2NmMTJmNTdjN2JiNWY5ZDYxNmU5ZjlitK/bBA==: 00:37:23.050 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:37:23.050 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:23.050 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:23.050 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:23.050 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:23.050 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:23.050 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:37:23.050 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:23.051 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.051 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:23.051 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:23.051 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:23.051 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:23.051 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:23.051 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:23.051 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:23.051 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:23.051 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:23.051 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:23.051 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:23.051 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:23.051 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:23.051 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:23.051 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.310 nvme0n1 00:37:23.310 
10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:23.310 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:23.310 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:23.310 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:23.310 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.310 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:23.310 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:23.310 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:23.310 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:23.310 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.310 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:23.310 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:23.310 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:37:23.310 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:23.310 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:23.310 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:23.310 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:23.310 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmIxYjhkNTliYzE1MGQ2ODZmZDgzNzlmNWNiY2UwMzk8eQIO: 00:37:23.310 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTI5OWRmODZhODgyNjE4MWNkNTgwMTM0MzBkYzljNTQ2R3GO: 00:37:23.310 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:23.310 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:23.310 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmIxYjhkNTliYzE1MGQ2ODZmZDgzNzlmNWNiY2UwMzk8eQIO: 00:37:23.310 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTI5OWRmODZhODgyNjE4MWNkNTgwMTM0MzBkYzljNTQ2R3GO: ]] 00:37:23.310 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTI5OWRmODZhODgyNjE4MWNkNTgwMTM0MzBkYzljNTQ2R3GO: 00:37:23.310 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:37:23.310 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:23.310 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:23.310 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:23.310 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:23.310 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:23.310 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:37:23.310 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:23.310 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.310 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:23.310 10:53:28 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:37:23.310 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:23.310 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:23.310 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:23.310 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:23.310 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:23.310 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:23.310 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:23.310 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:23.310 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:23.310 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:23.310 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:23.310 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:23.310 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.569 nvme0n1 00:37:23.569 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:23.569 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:23.569 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:23.569 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:23.569 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.569 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:23.569 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:23.569 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:23.569 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:23.569 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.569 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:23.569 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:23.570 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:37:23.570 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:23.570 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:23.570 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:23.570 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:23.570 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjA3ODIzZjk1ZmY4MTVlMDBiYzZiMTk1MDk2NzI4ZGE3MTNmODg3ODIwYjllZDU3xsMkhQ==: 00:37:23.570 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWY4ZDQ3ZDc4NzdlZGQzMTM1MTg3YTEzOTVjZWE0NGQeD0No: 00:37:23.570 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:23.570 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
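Within each DH group the loop above walks keyids 0 through 4, and the last key deliberately has no controller key (the trace shows ckey= set to an empty string for keyid 4). The ${ckeys[keyid]:+...} expansion at host/auth.sh@58 then yields an empty array, so the attach call omits --dhchap-ctrlr-key and the host authenticates itself without requesting mutual authentication from the controller. A small sketch of that branch, assuming the keys/ckeys arrays and the rpc_cmd wrapper defined earlier in the script:

# Optional controller key: expands to two words when ckeys[keyid] is non-empty,
# and to nothing otherwise (mirrors host/auth.sh@58 in the trace above).
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
  -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
  --dhchap-key "key${keyid}" "${ckey[@]}"
# keyid 0-3: ckey=(--dhchap-ctrlr-key ckeyN)  -> bidirectional DH-HMAC-CHAP
# keyid 4:   ckey=()                          -> host-only (unidirectional) authentication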
00:37:23.570 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjA3ODIzZjk1ZmY4MTVlMDBiYzZiMTk1MDk2NzI4ZGE3MTNmODg3ODIwYjllZDU3xsMkhQ==: 00:37:23.570 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWY4ZDQ3ZDc4NzdlZGQzMTM1MTg3YTEzOTVjZWE0NGQeD0No: ]] 00:37:23.570 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWY4ZDQ3ZDc4NzdlZGQzMTM1MTg3YTEzOTVjZWE0NGQeD0No: 00:37:23.570 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:37:23.570 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:23.570 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:23.570 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:23.570 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:23.570 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:23.570 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:37:23.570 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:23.570 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.570 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:23.570 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:23.570 10:53:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:23.570 10:53:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:23.570 10:53:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:23.570 10:53:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:23.570 10:53:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:23.570 10:53:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:23.570 10:53:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:23.570 10:53:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:23.570 10:53:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:23.570 10:53:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:23.570 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:23.570 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:23.570 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.830 nvme0n1 00:37:23.830 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:23.830 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:23.830 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:23.830 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:23.830 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.830 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:23.830 
10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:23.830 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:23.830 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:23.830 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.830 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:23.830 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:23.830 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:37:23.830 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:23.830 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:23.830 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:23.830 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:23.830 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWVmZWQwZDczNmNhOTYzZjI5OTAwNDFmZjczNDQ0MTg5YjY3YjYwMjQ2Nzg0YWM1NzA0ZjZlOTg1ZTU5YTJjYxUuTy0=: 00:37:23.830 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:23.830 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:23.830 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:23.830 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWVmZWQwZDczNmNhOTYzZjI5OTAwNDFmZjczNDQ0MTg5YjY3YjYwMjQ2Nzg0YWM1NzA0ZjZlOTg1ZTU5YTJjYxUuTy0=: 00:37:23.830 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:23.830 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:37:23.830 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:23.830 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:23.830 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:23.830 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:23.830 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:23.830 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:37:23.830 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:23.830 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.830 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:23.830 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:23.830 10:53:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:23.830 10:53:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:23.830 10:53:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:23.830 10:53:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:23.830 10:53:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:23.830 10:53:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:23.830 10:53:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:23.830 10:53:29 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:23.830 10:53:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:23.830 10:53:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:23.830 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:23.830 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:23.830 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:24.089 nvme0n1 00:37:24.089 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:24.089 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:24.089 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:24.089 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:24.089 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:24.089 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:24.089 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:24.089 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:24.089 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:24.089 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:24.089 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:24.089 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:24.089 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:24.089 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:37:24.089 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:24.089 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:24.089 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:24.089 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:24.089 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJmNGY4ZjVhNjYzZjNmMmE5M2M3YzA2MWJiZDIzYjO1d+Fn: 00:37:24.089 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDI3YTBmMmQxNTY2OGU5NWI5Y2QxZWRhZTQ5YjM2ZGI4OWUyZWQ1ZTRjYTlhNWMyZTY4ZjhmODc2NWQwYTQ2Narqa48=: 00:37:24.089 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:24.089 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:24.089 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJmNGY4ZjVhNjYzZjNmMmE5M2M3YzA2MWJiZDIzYjO1d+Fn: 00:37:24.089 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDI3YTBmMmQxNTY2OGU5NWI5Y2QxZWRhZTQ5YjM2ZGI4OWUyZWQ1ZTRjYTlhNWMyZTY4ZjhmODc2NWQwYTQ2Narqa48=: ]] 00:37:24.089 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDI3YTBmMmQxNTY2OGU5NWI5Y2QxZWRhZTQ5YjM2ZGI4OWUyZWQ1ZTRjYTlhNWMyZTY4ZjhmODc2NWQwYTQ2Narqa48=: 00:37:24.089 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:37:24.089 10:53:29 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:24.089 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:24.089 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:24.089 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:24.089 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:24.089 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:37:24.089 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:24.089 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:24.089 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:24.089 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:24.090 10:53:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:24.090 10:53:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:24.090 10:53:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:24.090 10:53:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:24.090 10:53:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:24.090 10:53:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:24.090 10:53:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:24.090 10:53:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:24.090 10:53:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:24.090 10:53:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:24.090 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:24.090 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:24.090 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:24.349 nvme0n1 00:37:24.349 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:24.349 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:24.349 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:24.349 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:24.349 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:24.349 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:24.349 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:24.349 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:24.349 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:24.349 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:24.349 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:24.349 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:37:24.349 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:37:24.349 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:24.349 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:24.349 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:24.349 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:24.349 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzcwYTcwYmQ4NjkyYWExZmNhOGI1YjA5ZDY2NTEzMGZhZmVhYjU2ODA3ZWQ1ZmY1COXRbw==: 00:37:24.349 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWVkYTE3ZWE2MGQwZTE0ZDc5NDVmMmUxN2NmMTJmNTdjN2JiNWY5ZDYxNmU5ZjlitK/bBA==: 00:37:24.349 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:24.349 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:24.349 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzcwYTcwYmQ4NjkyYWExZmNhOGI1YjA5ZDY2NTEzMGZhZmVhYjU2ODA3ZWQ1ZmY1COXRbw==: 00:37:24.349 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWVkYTE3ZWE2MGQwZTE0ZDc5NDVmMmUxN2NmMTJmNTdjN2JiNWY5ZDYxNmU5ZjlitK/bBA==: ]] 00:37:24.349 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWVkYTE3ZWE2MGQwZTE0ZDc5NDVmMmUxN2NmMTJmNTdjN2JiNWY5ZDYxNmU5ZjlitK/bBA==: 00:37:24.349 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:37:24.349 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:24.349 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:24.349 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:24.349 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:24.349 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:24.349 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:37:24.349 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:24.349 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:24.349 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:24.349 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:24.349 10:53:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:24.349 10:53:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:24.349 10:53:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:24.349 10:53:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:24.349 10:53:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:24.349 10:53:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:24.349 10:53:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:24.349 10:53:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:24.349 10:53:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:24.349 10:53:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:24.349 10:53:29 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:24.349 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:24.349 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:24.608 nvme0n1 00:37:24.608 10:53:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:24.608 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:24.608 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:24.608 10:53:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:24.608 10:53:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:24.608 10:53:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:24.608 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:24.608 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:24.608 10:53:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:24.608 10:53:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:24.867 10:53:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:24.867 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:24.867 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:37:24.867 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:24.867 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:24.867 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:24.867 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:24.867 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmIxYjhkNTliYzE1MGQ2ODZmZDgzNzlmNWNiY2UwMzk8eQIO: 00:37:24.867 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTI5OWRmODZhODgyNjE4MWNkNTgwMTM0MzBkYzljNTQ2R3GO: 00:37:24.867 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:24.867 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:24.867 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmIxYjhkNTliYzE1MGQ2ODZmZDgzNzlmNWNiY2UwMzk8eQIO: 00:37:24.867 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTI5OWRmODZhODgyNjE4MWNkNTgwMTM0MzBkYzljNTQ2R3GO: ]] 00:37:24.867 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTI5OWRmODZhODgyNjE4MWNkNTgwMTM0MzBkYzljNTQ2R3GO: 00:37:24.867 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:37:24.867 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:24.867 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:24.867 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:24.867 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:24.868 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:24.868 10:53:30 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:37:24.868 10:53:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:24.868 10:53:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:24.868 10:53:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:24.868 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:24.868 10:53:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:24.868 10:53:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:24.868 10:53:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:24.868 10:53:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:24.868 10:53:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:24.868 10:53:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:24.868 10:53:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:24.868 10:53:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:24.868 10:53:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:24.868 10:53:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:24.868 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:24.868 10:53:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:24.868 10:53:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.128 nvme0n1 00:37:25.128 10:53:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:25.128 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:25.128 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:25.128 10:53:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:25.128 10:53:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.128 10:53:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:25.128 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:25.128 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:25.128 10:53:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:25.128 10:53:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.128 10:53:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:25.128 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:25.128 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:37:25.128 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:25.128 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:25.128 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:25.128 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
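The get_main_ns_ip calls that recur throughout the trace only decide which address the initiator dials: for tcp it dereferences NVMF_INITIATOR_IP (10.0.0.1 in this run), while rdma would use NVMF_FIRST_TARGET_IP. A condensed sketch of that helper as it can be read from the nvmf/common.sh lines above (the TEST_TRANSPORT variable name is an assumption of this sketch; the real implementation may differ in detail):

# Pick the address to connect to based on the transport under test
# (condensed from nvmf/common.sh@741-755 as traced above).
get_main_ns_ip() {
    local ip
    local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}    # name of the env var, e.g. NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1             # indirect expansion: the actual address
    echo "${!ip}"                           # 10.0.0.1 for tcp in this run
}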
00:37:25.128 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjA3ODIzZjk1ZmY4MTVlMDBiYzZiMTk1MDk2NzI4ZGE3MTNmODg3ODIwYjllZDU3xsMkhQ==: 00:37:25.128 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWY4ZDQ3ZDc4NzdlZGQzMTM1MTg3YTEzOTVjZWE0NGQeD0No: 00:37:25.128 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:25.128 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:25.128 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjA3ODIzZjk1ZmY4MTVlMDBiYzZiMTk1MDk2NzI4ZGE3MTNmODg3ODIwYjllZDU3xsMkhQ==: 00:37:25.128 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWY4ZDQ3ZDc4NzdlZGQzMTM1MTg3YTEzOTVjZWE0NGQeD0No: ]] 00:37:25.128 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWY4ZDQ3ZDc4NzdlZGQzMTM1MTg3YTEzOTVjZWE0NGQeD0No: 00:37:25.128 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:37:25.128 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:25.128 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:25.128 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:25.128 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:25.128 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:25.128 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:37:25.128 10:53:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:25.128 10:53:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.128 10:53:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:25.128 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:25.128 10:53:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:25.128 10:53:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:25.128 10:53:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:25.128 10:53:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:25.128 10:53:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:25.128 10:53:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:25.128 10:53:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:25.128 10:53:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:25.128 10:53:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:25.128 10:53:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:25.128 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:25.128 10:53:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:25.128 10:53:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.388 nvme0n1 00:37:25.388 10:53:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:25.388 10:53:30 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:25.388 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:25.388 10:53:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:25.388 10:53:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.388 10:53:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:25.388 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:25.388 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:25.388 10:53:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:25.388 10:53:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.388 10:53:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:25.388 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:25.388 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:37:25.388 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:25.388 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:25.388 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:25.388 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:25.388 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWVmZWQwZDczNmNhOTYzZjI5OTAwNDFmZjczNDQ0MTg5YjY3YjYwMjQ2Nzg0YWM1NzA0ZjZlOTg1ZTU5YTJjYxUuTy0=: 00:37:25.388 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:25.388 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:25.388 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:25.388 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWVmZWQwZDczNmNhOTYzZjI5OTAwNDFmZjczNDQ0MTg5YjY3YjYwMjQ2Nzg0YWM1NzA0ZjZlOTg1ZTU5YTJjYxUuTy0=: 00:37:25.388 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:25.388 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:37:25.388 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:25.388 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:25.388 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:25.388 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:25.388 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:25.388 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:37:25.388 10:53:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:25.388 10:53:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.388 10:53:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:25.388 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:25.388 10:53:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:25.388 10:53:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:25.388 10:53:31 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:37:25.388 10:53:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:25.388 10:53:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:25.388 10:53:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:25.388 10:53:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:25.388 10:53:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:25.388 10:53:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:25.389 10:53:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:25.389 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:25.389 10:53:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:25.389 10:53:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.649 nvme0n1 00:37:25.649 10:53:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:25.649 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:25.649 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:25.649 10:53:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:25.649 10:53:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.649 10:53:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:25.649 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:25.649 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:25.649 10:53:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:25.649 10:53:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.649 10:53:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:25.649 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:25.649 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:25.649 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:37:25.649 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:25.649 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:25.649 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:25.649 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:25.649 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJmNGY4ZjVhNjYzZjNmMmE5M2M3YzA2MWJiZDIzYjO1d+Fn: 00:37:25.649 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDI3YTBmMmQxNTY2OGU5NWI5Y2QxZWRhZTQ5YjM2ZGI4OWUyZWQ1ZTRjYTlhNWMyZTY4ZjhmODc2NWQwYTQ2Narqa48=: 00:37:25.649 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:25.649 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:25.649 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJmNGY4ZjVhNjYzZjNmMmE5M2M3YzA2MWJiZDIzYjO1d+Fn: 00:37:25.649 10:53:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDI3YTBmMmQxNTY2OGU5NWI5Y2QxZWRhZTQ5YjM2ZGI4OWUyZWQ1ZTRjYTlhNWMyZTY4ZjhmODc2NWQwYTQ2Narqa48=: ]] 00:37:25.649 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDI3YTBmMmQxNTY2OGU5NWI5Y2QxZWRhZTQ5YjM2ZGI4OWUyZWQ1ZTRjYTlhNWMyZTY4ZjhmODc2NWQwYTQ2Narqa48=: 00:37:25.649 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:37:25.649 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:25.649 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:25.649 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:25.649 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:25.649 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:25.649 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:37:25.649 10:53:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:25.649 10:53:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.649 10:53:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:25.649 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:25.649 10:53:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:25.649 10:53:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:25.649 10:53:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:25.649 10:53:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:25.649 10:53:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:25.649 10:53:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:25.649 10:53:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:25.649 10:53:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:25.649 10:53:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:25.649 10:53:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:25.649 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:25.649 10:53:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:25.649 10:53:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.220 nvme0n1 00:37:26.220 10:53:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:26.220 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:26.220 10:53:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:26.220 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:26.220 10:53:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.220 10:53:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:26.220 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:26.220 
10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:26.220 10:53:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:26.220 10:53:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.220 10:53:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:26.220 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:26.220 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:37:26.220 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:26.220 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:26.220 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:26.220 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:26.220 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzcwYTcwYmQ4NjkyYWExZmNhOGI1YjA5ZDY2NTEzMGZhZmVhYjU2ODA3ZWQ1ZmY1COXRbw==: 00:37:26.220 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWVkYTE3ZWE2MGQwZTE0ZDc5NDVmMmUxN2NmMTJmNTdjN2JiNWY5ZDYxNmU5ZjlitK/bBA==: 00:37:26.220 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:26.220 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:26.220 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzcwYTcwYmQ4NjkyYWExZmNhOGI1YjA5ZDY2NTEzMGZhZmVhYjU2ODA3ZWQ1ZmY1COXRbw==: 00:37:26.220 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWVkYTE3ZWE2MGQwZTE0ZDc5NDVmMmUxN2NmMTJmNTdjN2JiNWY5ZDYxNmU5ZjlitK/bBA==: ]] 00:37:26.220 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWVkYTE3ZWE2MGQwZTE0ZDc5NDVmMmUxN2NmMTJmNTdjN2JiNWY5ZDYxNmU5ZjlitK/bBA==: 00:37:26.220 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:37:26.220 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:26.220 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:26.220 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:26.220 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:26.220 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:26.220 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:37:26.220 10:53:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:26.220 10:53:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.220 10:53:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:26.220 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:26.220 10:53:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:26.220 10:53:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:26.220 10:53:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:26.220 10:53:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:26.220 10:53:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:26.220 10:53:31 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:26.220 10:53:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:26.220 10:53:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:26.220 10:53:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:26.220 10:53:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:26.220 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:26.220 10:53:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:26.220 10:53:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.789 nvme0n1 00:37:26.789 10:53:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:26.789 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:26.789 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:26.789 10:53:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:26.789 10:53:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.789 10:53:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:26.789 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:26.789 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:26.789 10:53:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:26.789 10:53:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.789 10:53:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:26.789 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:26.789 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:37:26.789 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:26.789 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:26.789 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:26.789 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:26.789 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmIxYjhkNTliYzE1MGQ2ODZmZDgzNzlmNWNiY2UwMzk8eQIO: 00:37:26.789 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTI5OWRmODZhODgyNjE4MWNkNTgwMTM0MzBkYzljNTQ2R3GO: 00:37:26.789 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:26.789 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:26.789 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmIxYjhkNTliYzE1MGQ2ODZmZDgzNzlmNWNiY2UwMzk8eQIO: 00:37:26.789 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTI5OWRmODZhODgyNjE4MWNkNTgwMTM0MzBkYzljNTQ2R3GO: ]] 00:37:26.789 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTI5OWRmODZhODgyNjE4MWNkNTgwMTM0MzBkYzljNTQ2R3GO: 00:37:26.789 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:37:26.789 10:53:32 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:26.789 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:26.789 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:26.789 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:26.789 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:26.789 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:37:26.789 10:53:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:26.789 10:53:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.789 10:53:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:26.789 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:26.789 10:53:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:26.789 10:53:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:26.789 10:53:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:26.789 10:53:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:26.789 10:53:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:26.789 10:53:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:26.789 10:53:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:26.789 10:53:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:26.789 10:53:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:26.789 10:53:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:26.789 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:26.789 10:53:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:26.789 10:53:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.360 nvme0n1 00:37:27.360 10:53:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:27.360 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:27.360 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:27.360 10:53:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:27.360 10:53:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.360 10:53:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:27.360 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:27.360 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:27.360 10:53:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:27.360 10:53:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.360 10:53:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:27.360 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:27.360 
10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:37:27.360 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:27.360 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:27.360 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:27.360 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:27.360 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjA3ODIzZjk1ZmY4MTVlMDBiYzZiMTk1MDk2NzI4ZGE3MTNmODg3ODIwYjllZDU3xsMkhQ==: 00:37:27.360 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWY4ZDQ3ZDc4NzdlZGQzMTM1MTg3YTEzOTVjZWE0NGQeD0No: 00:37:27.360 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:27.360 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:27.360 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjA3ODIzZjk1ZmY4MTVlMDBiYzZiMTk1MDk2NzI4ZGE3MTNmODg3ODIwYjllZDU3xsMkhQ==: 00:37:27.360 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWY4ZDQ3ZDc4NzdlZGQzMTM1MTg3YTEzOTVjZWE0NGQeD0No: ]] 00:37:27.360 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWY4ZDQ3ZDc4NzdlZGQzMTM1MTg3YTEzOTVjZWE0NGQeD0No: 00:37:27.360 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:37:27.360 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:27.360 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:27.360 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:27.360 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:27.360 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:27.361 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:37:27.361 10:53:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:27.361 10:53:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.361 10:53:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:27.361 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:27.361 10:53:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:27.361 10:53:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:27.361 10:53:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:27.361 10:53:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:27.361 10:53:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:27.361 10:53:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:27.361 10:53:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:27.361 10:53:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:27.361 10:53:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:27.361 10:53:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:27.361 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:27.361 10:53:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:27.361 10:53:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.930 nvme0n1 00:37:27.930 10:53:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:27.930 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:27.930 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:27.930 10:53:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:27.930 10:53:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.930 10:53:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:27.930 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:27.930 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:27.930 10:53:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:27.930 10:53:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.930 10:53:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:27.930 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:27.930 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:37:27.930 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:27.930 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:27.930 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:27.930 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:27.930 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWVmZWQwZDczNmNhOTYzZjI5OTAwNDFmZjczNDQ0MTg5YjY3YjYwMjQ2Nzg0YWM1NzA0ZjZlOTg1ZTU5YTJjYxUuTy0=: 00:37:27.930 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:27.930 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:27.930 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:27.930 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWVmZWQwZDczNmNhOTYzZjI5OTAwNDFmZjczNDQ0MTg5YjY3YjYwMjQ2Nzg0YWM1NzA0ZjZlOTg1ZTU5YTJjYxUuTy0=: 00:37:27.930 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:27.930 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:37:27.930 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:27.930 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:27.930 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:27.930 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:27.930 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:27.930 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:37:27.930 10:53:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:27.930 10:53:33 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:37:27.930 10:53:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:27.930 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:27.930 10:53:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:27.930 10:53:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:27.930 10:53:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:27.930 10:53:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:27.930 10:53:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:27.930 10:53:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:27.930 10:53:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:27.930 10:53:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:27.930 10:53:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:27.930 10:53:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:27.930 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:27.930 10:53:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:27.930 10:53:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:28.189 nvme0n1 00:37:28.189 10:53:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:28.189 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:28.189 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:28.189 10:53:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:28.189 10:53:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:28.189 10:53:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:28.448 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:28.448 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:28.448 10:53:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:28.448 10:53:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:28.448 10:53:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:28.448 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:28.448 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:28.448 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:37:28.448 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:28.448 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:28.448 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:28.448 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:28.448 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJmNGY4ZjVhNjYzZjNmMmE5M2M3YzA2MWJiZDIzYjO1d+Fn: 00:37:28.448 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NDI3YTBmMmQxNTY2OGU5NWI5Y2QxZWRhZTQ5YjM2ZGI4OWUyZWQ1ZTRjYTlhNWMyZTY4ZjhmODc2NWQwYTQ2Narqa48=: 00:37:28.448 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:28.448 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:28.448 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJmNGY4ZjVhNjYzZjNmMmE5M2M3YzA2MWJiZDIzYjO1d+Fn: 00:37:28.448 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDI3YTBmMmQxNTY2OGU5NWI5Y2QxZWRhZTQ5YjM2ZGI4OWUyZWQ1ZTRjYTlhNWMyZTY4ZjhmODc2NWQwYTQ2Narqa48=: ]] 00:37:28.448 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDI3YTBmMmQxNTY2OGU5NWI5Y2QxZWRhZTQ5YjM2ZGI4OWUyZWQ1ZTRjYTlhNWMyZTY4ZjhmODc2NWQwYTQ2Narqa48=: 00:37:28.448 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:37:28.448 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:28.448 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:28.448 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:28.448 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:28.448 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:28.448 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:37:28.448 10:53:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:28.448 10:53:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:28.448 10:53:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:28.448 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:28.448 10:53:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:28.448 10:53:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:28.448 10:53:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:28.448 10:53:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:28.448 10:53:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:28.448 10:53:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:28.448 10:53:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:28.448 10:53:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:28.448 10:53:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:28.448 10:53:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:28.448 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:28.448 10:53:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:28.448 10:53:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.017 nvme0n1 00:37:29.017 10:53:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:29.017 10:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:29.017 10:53:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:29.017 10:53:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:29.017 10:53:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.017 10:53:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:29.017 10:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:29.017 10:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:29.017 10:53:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:29.017 10:53:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.017 10:53:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:29.017 10:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:29.017 10:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:37:29.017 10:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:29.017 10:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:29.017 10:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:29.017 10:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:29.017 10:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzcwYTcwYmQ4NjkyYWExZmNhOGI1YjA5ZDY2NTEzMGZhZmVhYjU2ODA3ZWQ1ZmY1COXRbw==: 00:37:29.017 10:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWVkYTE3ZWE2MGQwZTE0ZDc5NDVmMmUxN2NmMTJmNTdjN2JiNWY5ZDYxNmU5ZjlitK/bBA==: 00:37:29.017 10:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:29.017 10:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:29.017 10:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzcwYTcwYmQ4NjkyYWExZmNhOGI1YjA5ZDY2NTEzMGZhZmVhYjU2ODA3ZWQ1ZmY1COXRbw==: 00:37:29.017 10:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWVkYTE3ZWE2MGQwZTE0ZDc5NDVmMmUxN2NmMTJmNTdjN2JiNWY5ZDYxNmU5ZjlitK/bBA==: ]] 00:37:29.017 10:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWVkYTE3ZWE2MGQwZTE0ZDc5NDVmMmUxN2NmMTJmNTdjN2JiNWY5ZDYxNmU5ZjlitK/bBA==: 00:37:29.017 10:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:37:29.017 10:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:29.017 10:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:29.017 10:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:29.017 10:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:29.017 10:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:29.017 10:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:37:29.017 10:53:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:29.017 10:53:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.017 10:53:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:29.017 10:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:29.017 10:53:34 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:37:29.017 10:53:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:29.017 10:53:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:29.017 10:53:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:29.017 10:53:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:29.017 10:53:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:29.017 10:53:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:29.017 10:53:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:29.017 10:53:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:29.017 10:53:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:29.017 10:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:29.017 10:53:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:29.017 10:53:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.959 nvme0n1 00:37:29.959 10:53:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:29.959 10:53:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:29.959 10:53:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:29.959 10:53:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:29.959 10:53:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.959 10:53:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:29.959 10:53:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:29.959 10:53:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:29.959 10:53:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:29.959 10:53:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.959 10:53:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:29.959 10:53:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:29.959 10:53:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:37:29.959 10:53:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:29.959 10:53:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:29.959 10:53:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:29.959 10:53:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:29.959 10:53:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmIxYjhkNTliYzE1MGQ2ODZmZDgzNzlmNWNiY2UwMzk8eQIO: 00:37:29.959 10:53:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTI5OWRmODZhODgyNjE4MWNkNTgwMTM0MzBkYzljNTQ2R3GO: 00:37:29.959 10:53:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:29.959 10:53:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:29.959 10:53:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NmIxYjhkNTliYzE1MGQ2ODZmZDgzNzlmNWNiY2UwMzk8eQIO: 00:37:29.959 10:53:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTI5OWRmODZhODgyNjE4MWNkNTgwMTM0MzBkYzljNTQ2R3GO: ]] 00:37:29.959 10:53:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTI5OWRmODZhODgyNjE4MWNkNTgwMTM0MzBkYzljNTQ2R3GO: 00:37:29.959 10:53:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:37:29.959 10:53:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:29.959 10:53:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:29.959 10:53:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:29.959 10:53:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:29.959 10:53:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:29.959 10:53:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:37:29.959 10:53:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:29.959 10:53:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.959 10:53:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:29.959 10:53:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:29.959 10:53:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:29.959 10:53:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:29.959 10:53:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:29.959 10:53:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:29.959 10:53:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:29.959 10:53:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:29.959 10:53:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:29.959 10:53:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:29.959 10:53:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:29.959 10:53:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:29.959 10:53:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:29.959 10:53:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:29.959 10:53:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:30.902 nvme0n1 00:37:30.902 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:30.902 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:30.902 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:30.902 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:30.902 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:30.902 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:30.902 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:30.902 
10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:30.902 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:30.902 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:30.902 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:30.902 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:30.902 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:37:30.902 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:30.902 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:30.902 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:30.902 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:30.902 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjA3ODIzZjk1ZmY4MTVlMDBiYzZiMTk1MDk2NzI4ZGE3MTNmODg3ODIwYjllZDU3xsMkhQ==: 00:37:30.902 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWY4ZDQ3ZDc4NzdlZGQzMTM1MTg3YTEzOTVjZWE0NGQeD0No: 00:37:30.902 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:30.902 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:30.902 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjA3ODIzZjk1ZmY4MTVlMDBiYzZiMTk1MDk2NzI4ZGE3MTNmODg3ODIwYjllZDU3xsMkhQ==: 00:37:30.902 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWY4ZDQ3ZDc4NzdlZGQzMTM1MTg3YTEzOTVjZWE0NGQeD0No: ]] 00:37:30.902 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWY4ZDQ3ZDc4NzdlZGQzMTM1MTg3YTEzOTVjZWE0NGQeD0No: 00:37:30.902 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:37:30.902 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:30.902 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:30.902 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:30.902 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:30.902 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:30.902 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:37:30.902 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:30.902 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:30.902 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:30.902 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:30.902 10:53:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:30.902 10:53:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:30.902 10:53:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:30.902 10:53:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:30.902 10:53:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:30.902 10:53:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
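The repeated get_main_ns_ip expansions above (nvmf/common.sh@741-@755) all resolve the initiator address the same way. The helper below is a reconstruction inferred from that trace rather than a copy of the source file; the variable names match the trace, while the exact control flow around the emptiness checks is an assumption.

    # Pick the environment variable that holds the main namespace IP for the
    # transport in use, then print its value via indirect expansion.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA runs use the first target IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP       # TCP runs use the initiator IP

        # Give up if the transport is unset or has no candidate variable.
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1

        ip=${ip_candidates[$TEST_TRANSPORT]}
        # The candidate holds the *name* of the variable carrying the address.
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"
    }

In this run TEST_TRANSPORT is tcp and NVMF_INITIATOR_IP is 10.0.0.1, which is why every expansion above ends with "echo 10.0.0.1".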
00:37:30.902 10:53:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:30.902 10:53:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:30.902 10:53:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:30.902 10:53:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:30.902 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:30.902 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:30.902 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.472 nvme0n1 00:37:31.472 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:31.472 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:31.472 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:31.472 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:31.472 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.472 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:31.472 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:31.472 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:31.472 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:31.472 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.472 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:31.472 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:31.472 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:37:31.472 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:31.472 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:31.473 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:31.473 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:31.473 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWVmZWQwZDczNmNhOTYzZjI5OTAwNDFmZjczNDQ0MTg5YjY3YjYwMjQ2Nzg0YWM1NzA0ZjZlOTg1ZTU5YTJjYxUuTy0=: 00:37:31.473 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:31.473 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:31.473 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:31.473 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWVmZWQwZDczNmNhOTYzZjI5OTAwNDFmZjczNDQ0MTg5YjY3YjYwMjQ2Nzg0YWM1NzA0ZjZlOTg1ZTU5YTJjYxUuTy0=: 00:37:31.473 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:31.473 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:37:31.473 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:31.473 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:31.473 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:31.473 
10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:31.473 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:31.473 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:37:31.473 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:31.473 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.473 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:31.473 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:31.473 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:31.473 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:31.473 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:31.473 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:31.473 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:31.473 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:31.473 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:31.473 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:31.473 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:31.473 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:31.473 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:31.473 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:31.473 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.434 nvme0n1 00:37:32.434 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:32.434 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:32.434 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:32.434 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:32.434 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.434 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:32.434 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:32.434 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:32.434 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:32.434 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.434 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:32.434 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:37:32.434 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:32.434 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:32.434 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:37:32.434 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:32.434 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:32.434 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:32.434 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:32.434 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJmNGY4ZjVhNjYzZjNmMmE5M2M3YzA2MWJiZDIzYjO1d+Fn: 00:37:32.434 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDI3YTBmMmQxNTY2OGU5NWI5Y2QxZWRhZTQ5YjM2ZGI4OWUyZWQ1ZTRjYTlhNWMyZTY4ZjhmODc2NWQwYTQ2Narqa48=: 00:37:32.434 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:32.434 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:32.434 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJmNGY4ZjVhNjYzZjNmMmE5M2M3YzA2MWJiZDIzYjO1d+Fn: 00:37:32.434 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDI3YTBmMmQxNTY2OGU5NWI5Y2QxZWRhZTQ5YjM2ZGI4OWUyZWQ1ZTRjYTlhNWMyZTY4ZjhmODc2NWQwYTQ2Narqa48=: ]] 00:37:32.434 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDI3YTBmMmQxNTY2OGU5NWI5Y2QxZWRhZTQ5YjM2ZGI4OWUyZWQ1ZTRjYTlhNWMyZTY4ZjhmODc2NWQwYTQ2Narqa48=: 00:37:32.434 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:37:32.434 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:32.434 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:32.434 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:32.434 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:32.434 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:32.434 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:37:32.434 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:32.434 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.434 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:32.434 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:32.434 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:32.434 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:32.434 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:32.434 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:32.434 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:32.434 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:32.434 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:32.434 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:32.434 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:32.434 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:32.434 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:32.434 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:32.434 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.434 nvme0n1 00:37:32.434 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:32.434 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:32.434 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:32.434 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:32.434 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.434 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:32.434 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:32.434 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:32.434 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:32.434 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.434 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:32.434 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:32.434 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:37:32.434 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:32.435 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:32.435 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:32.435 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:32.435 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzcwYTcwYmQ4NjkyYWExZmNhOGI1YjA5ZDY2NTEzMGZhZmVhYjU2ODA3ZWQ1ZmY1COXRbw==: 00:37:32.435 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWVkYTE3ZWE2MGQwZTE0ZDc5NDVmMmUxN2NmMTJmNTdjN2JiNWY5ZDYxNmU5ZjlitK/bBA==: 00:37:32.435 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:32.435 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:32.435 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzcwYTcwYmQ4NjkyYWExZmNhOGI1YjA5ZDY2NTEzMGZhZmVhYjU2ODA3ZWQ1ZmY1COXRbw==: 00:37:32.435 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWVkYTE3ZWE2MGQwZTE0ZDc5NDVmMmUxN2NmMTJmNTdjN2JiNWY5ZDYxNmU5ZjlitK/bBA==: ]] 00:37:32.435 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWVkYTE3ZWE2MGQwZTE0ZDc5NDVmMmUxN2NmMTJmNTdjN2JiNWY5ZDYxNmU5ZjlitK/bBA==: 00:37:32.435 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:37:32.435 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:32.435 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:32.435 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:32.435 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:32.435 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
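The connect_authenticate pass traced above repeats the same host-side RPC sequence for every (digest, dhgroup, keyid) combination. A minimal sketch of that sequence for the sha384/ffdhe2048/keyid=0 case, assuming rpc_cmd resolves to SPDK's scripts/rpc.py and that the key names key0/ckey0 were registered earlier in the run (neither assumption is visible in this excerpt):

  # Limit host-side DH-HMAC-CHAP negotiation to the digest and DH group under test.
  rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

  # Attach to the target listener resolved by get_main_ns_ip (10.0.0.1:4420),
  # authenticating with the host key and controller key for this keyid.
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Confirm the controller was created, then detach before the next combination.
  [[ $(rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc.py bdev_nvme_detach_controller nvme0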
00:37:32.435 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:37:32.435 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:32.435 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.435 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:32.435 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:32.435 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:32.435 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:32.435 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:32.435 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:32.435 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:32.435 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:32.435 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:32.435 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:32.435 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:32.435 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:32.435 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:32.435 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:32.435 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.741 nvme0n1 00:37:32.741 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:32.741 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:32.741 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:32.741 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:32.741 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.741 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:32.741 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:32.741 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:32.741 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:32.741 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.741 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:32.741 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:32.741 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:37:32.741 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:32.741 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:32.741 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:32.741 10:53:38 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:37:32.741 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmIxYjhkNTliYzE1MGQ2ODZmZDgzNzlmNWNiY2UwMzk8eQIO: 00:37:32.741 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTI5OWRmODZhODgyNjE4MWNkNTgwMTM0MzBkYzljNTQ2R3GO: 00:37:32.741 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:32.741 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:32.741 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmIxYjhkNTliYzE1MGQ2ODZmZDgzNzlmNWNiY2UwMzk8eQIO: 00:37:32.741 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTI5OWRmODZhODgyNjE4MWNkNTgwMTM0MzBkYzljNTQ2R3GO: ]] 00:37:32.741 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTI5OWRmODZhODgyNjE4MWNkNTgwMTM0MzBkYzljNTQ2R3GO: 00:37:32.741 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:37:32.741 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:32.741 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:32.741 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:32.741 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:32.741 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:32.741 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:37:32.741 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:32.741 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.741 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:32.741 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:32.741 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:32.741 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:32.741 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:32.741 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:32.741 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:32.741 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:32.741 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:32.741 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:32.741 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:32.741 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:32.741 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:32.741 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:32.741 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.008 nvme0n1 00:37:33.008 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:33.008 10:53:38 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:33.008 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:33.008 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:33.008 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.008 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:33.008 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:33.008 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:33.008 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:33.008 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.008 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:33.008 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:33.008 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:37:33.008 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:33.008 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:33.008 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:33.008 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:33.008 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjA3ODIzZjk1ZmY4MTVlMDBiYzZiMTk1MDk2NzI4ZGE3MTNmODg3ODIwYjllZDU3xsMkhQ==: 00:37:33.008 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWY4ZDQ3ZDc4NzdlZGQzMTM1MTg3YTEzOTVjZWE0NGQeD0No: 00:37:33.008 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:33.008 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:33.008 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjA3ODIzZjk1ZmY4MTVlMDBiYzZiMTk1MDk2NzI4ZGE3MTNmODg3ODIwYjllZDU3xsMkhQ==: 00:37:33.008 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWY4ZDQ3ZDc4NzdlZGQzMTM1MTg3YTEzOTVjZWE0NGQeD0No: ]] 00:37:33.008 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWY4ZDQ3ZDc4NzdlZGQzMTM1MTg3YTEzOTVjZWE0NGQeD0No: 00:37:33.008 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:37:33.008 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:33.008 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:33.008 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:33.008 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:33.008 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:33.008 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:37:33.008 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:33.008 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.008 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:33.008 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:33.008 10:53:38 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:37:33.008 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:33.008 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:33.008 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:33.008 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:33.008 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:33.008 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:33.008 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:33.008 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:33.008 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:33.009 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:33.009 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:33.009 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.009 nvme0n1 00:37:33.009 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:33.009 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:33.009 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:33.009 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:33.009 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.009 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:33.270 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:33.270 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:33.270 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:33.270 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.270 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:33.270 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:33.270 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:37:33.270 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:33.270 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:33.270 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:33.270 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:33.270 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWVmZWQwZDczNmNhOTYzZjI5OTAwNDFmZjczNDQ0MTg5YjY3YjYwMjQ2Nzg0YWM1NzA0ZjZlOTg1ZTU5YTJjYxUuTy0=: 00:37:33.270 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:33.270 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:33.270 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:33.270 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YWVmZWQwZDczNmNhOTYzZjI5OTAwNDFmZjczNDQ0MTg5YjY3YjYwMjQ2Nzg0YWM1NzA0ZjZlOTg1ZTU5YTJjYxUuTy0=: 00:37:33.270 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:33.270 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:37:33.270 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:33.270 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:33.270 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:33.270 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:33.270 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:33.270 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:37:33.270 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:33.270 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.270 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:33.270 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:33.270 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:33.270 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.271 nvme0n1 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJmNGY4ZjVhNjYzZjNmMmE5M2M3YzA2MWJiZDIzYjO1d+Fn: 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDI3YTBmMmQxNTY2OGU5NWI5Y2QxZWRhZTQ5YjM2ZGI4OWUyZWQ1ZTRjYTlhNWMyZTY4ZjhmODc2NWQwYTQ2Narqa48=: 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJmNGY4ZjVhNjYzZjNmMmE5M2M3YzA2MWJiZDIzYjO1d+Fn: 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDI3YTBmMmQxNTY2OGU5NWI5Y2QxZWRhZTQ5YjM2ZGI4OWUyZWQ1ZTRjYTlhNWMyZTY4ZjhmODc2NWQwYTQ2Narqa48=: ]] 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDI3YTBmMmQxNTY2OGU5NWI5Y2QxZWRhZTQ5YjM2ZGI4OWUyZWQ1ZTRjYTlhNWMyZTY4ZjhmODc2NWQwYTQ2Narqa48=: 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
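get_main_ns_ip, traced at nvmf/common.sh@741-755 throughout this run, is what turns the active transport into the 10.0.0.1 address passed to bdev_nvme_attach_controller: it maps each transport to the name of the environment variable holding its address and echoes that variable's value. A condensed sketch of the logic visible in the trace; the name of the transport variable and the final indirect expansion are not shown by xtrace and are assumed here:

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
          ["rdma"]=NVMF_FIRST_TARGET_IP
          ["tcp"]=NVMF_INITIATOR_IP
      )

      # This run uses tcp, so the selected candidate is NVMF_INITIATOR_IP.
      [[ -z $TEST_TRANSPORT ]] && return 1                  # transport variable name assumed
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}

      ip=${!ip}     # indirect expansion (not visible in xtrace): NVMF_INITIATOR_IP -> 10.0.0.1
      [[ -z $ip ]] && return 1
      echo "$ip"
  }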
00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:33.271 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.533 nvme0n1 00:37:33.533 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:33.533 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:33.533 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:33.533 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:33.533 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.533 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:33.533 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:33.533 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:33.533 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:33.533 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.533 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:33.533 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:33.533 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:37:33.533 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:33.533 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:33.533 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:33.533 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:33.533 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzcwYTcwYmQ4NjkyYWExZmNhOGI1YjA5ZDY2NTEzMGZhZmVhYjU2ODA3ZWQ1ZmY1COXRbw==: 00:37:33.533 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWVkYTE3ZWE2MGQwZTE0ZDc5NDVmMmUxN2NmMTJmNTdjN2JiNWY5ZDYxNmU5ZjlitK/bBA==: 00:37:33.533 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:33.533 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:33.533 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzcwYTcwYmQ4NjkyYWExZmNhOGI1YjA5ZDY2NTEzMGZhZmVhYjU2ODA3ZWQ1ZmY1COXRbw==: 00:37:33.533 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWVkYTE3ZWE2MGQwZTE0ZDc5NDVmMmUxN2NmMTJmNTdjN2JiNWY5ZDYxNmU5ZjlitK/bBA==: ]] 00:37:33.533 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWVkYTE3ZWE2MGQwZTE0ZDc5NDVmMmUxN2NmMTJmNTdjN2JiNWY5ZDYxNmU5ZjlitK/bBA==: 00:37:33.533 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
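A small bash detail that matters for the keyid=4 iterations above: host/auth.sh@58 builds the controller-key option as ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}). The :+ expansion produces the option words only when ckeys[keyid] is set and non-empty, so for keyid 4 (whose ckey is empty, as the [[ -z '' ]] check at auth.sh@51 shows) the array expands to nothing and bdev_nvme_attach_controller is invoked with --dhchap-key key4 alone, i.e. without requesting bidirectional authentication. A standalone sketch of the pattern, with hypothetical key material:

  # ":+" expands to the bracketed words only when the entry is set and non-empty;
  # an empty entry therefore yields an empty array and the option pair is omitted.
  ckeys=([0]="DHHC-1:03:hypothetical-controller-key:" [4]="")
  keyid=4
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" "${ckey[@]}"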
00:37:33.533 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:33.533 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:33.533 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:33.533 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:33.533 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:33.533 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:37:33.533 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:33.533 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.533 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:33.533 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:33.533 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:33.533 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:33.533 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:33.533 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:33.533 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:33.533 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:33.533 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:33.533 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:33.533 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:33.533 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:33.533 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:33.533 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:33.533 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.795 nvme0n1 00:37:33.795 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:33.795 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:33.795 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:33.795 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:33.795 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.795 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:33.795 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:33.795 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:33.795 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:33.795 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.795 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:33.795 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:37:33.795 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:37:33.795 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:33.795 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:33.795 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:33.795 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:33.795 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmIxYjhkNTliYzE1MGQ2ODZmZDgzNzlmNWNiY2UwMzk8eQIO: 00:37:33.795 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTI5OWRmODZhODgyNjE4MWNkNTgwMTM0MzBkYzljNTQ2R3GO: 00:37:33.795 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:33.795 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:33.795 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmIxYjhkNTliYzE1MGQ2ODZmZDgzNzlmNWNiY2UwMzk8eQIO: 00:37:33.795 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTI5OWRmODZhODgyNjE4MWNkNTgwMTM0MzBkYzljNTQ2R3GO: ]] 00:37:33.795 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTI5OWRmODZhODgyNjE4MWNkNTgwMTM0MzBkYzljNTQ2R3GO: 00:37:33.795 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:37:33.795 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:33.795 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:33.795 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:33.795 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:33.795 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:33.795 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:37:33.795 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:33.795 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.795 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:33.795 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:33.795 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:33.795 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:33.795 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:33.795 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:33.795 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:33.795 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:33.795 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:33.795 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:33.795 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:33.795 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:33.795 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:33.795 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:33.795 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:34.056 nvme0n1 00:37:34.056 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.056 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:34.056 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:34.056 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.056 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:34.056 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.056 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:34.056 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:34.056 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.056 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:34.056 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.056 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:34.056 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:37:34.056 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:34.056 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:34.056 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:34.056 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:34.056 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjA3ODIzZjk1ZmY4MTVlMDBiYzZiMTk1MDk2NzI4ZGE3MTNmODg3ODIwYjllZDU3xsMkhQ==: 00:37:34.056 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWY4ZDQ3ZDc4NzdlZGQzMTM1MTg3YTEzOTVjZWE0NGQeD0No: 00:37:34.056 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:34.056 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:34.056 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjA3ODIzZjk1ZmY4MTVlMDBiYzZiMTk1MDk2NzI4ZGE3MTNmODg3ODIwYjllZDU3xsMkhQ==: 00:37:34.056 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWY4ZDQ3ZDc4NzdlZGQzMTM1MTg3YTEzOTVjZWE0NGQeD0No: ]] 00:37:34.056 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWY4ZDQ3ZDc4NzdlZGQzMTM1MTg3YTEzOTVjZWE0NGQeD0No: 00:37:34.056 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:37:34.056 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:34.056 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:34.056 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:34.056 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:34.056 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:34.056 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:37:34.056 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.056 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:34.056 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.056 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:34.056 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:34.056 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:34.056 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:34.056 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:34.056 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:34.056 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:34.056 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:34.056 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:34.056 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:34.056 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:34.056 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:34.056 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.056 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:34.318 nvme0n1 00:37:34.318 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.318 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:34.318 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:34.318 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.318 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:34.318 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.318 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:34.318 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:34.318 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.318 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:34.318 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.318 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:34.318 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:37:34.318 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:34.318 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:34.318 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:34.318 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:34.318 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YWVmZWQwZDczNmNhOTYzZjI5OTAwNDFmZjczNDQ0MTg5YjY3YjYwMjQ2Nzg0YWM1NzA0ZjZlOTg1ZTU5YTJjYxUuTy0=: 00:37:34.318 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:34.318 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:34.318 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:34.318 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWVmZWQwZDczNmNhOTYzZjI5OTAwNDFmZjczNDQ0MTg5YjY3YjYwMjQ2Nzg0YWM1NzA0ZjZlOTg1ZTU5YTJjYxUuTy0=: 00:37:34.318 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:34.318 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:37:34.318 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:34.318 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:34.318 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:34.318 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:34.318 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:34.318 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:37:34.318 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.318 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:34.318 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.318 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:34.318 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:34.318 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:34.318 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:34.318 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:34.318 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:34.318 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:34.318 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:34.318 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:34.318 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:34.318 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:34.318 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:34.318 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.318 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:34.581 nvme0n1 00:37:34.581 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.581 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:34.581 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:34.581 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.581 10:53:40 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:34.581 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.581 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:34.581 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:34.581 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.581 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:34.581 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.581 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:34.581 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:34.581 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:37:34.581 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:34.581 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:34.581 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:34.581 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:34.581 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJmNGY4ZjVhNjYzZjNmMmE5M2M3YzA2MWJiZDIzYjO1d+Fn: 00:37:34.581 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDI3YTBmMmQxNTY2OGU5NWI5Y2QxZWRhZTQ5YjM2ZGI4OWUyZWQ1ZTRjYTlhNWMyZTY4ZjhmODc2NWQwYTQ2Narqa48=: 00:37:34.581 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:34.581 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:34.581 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJmNGY4ZjVhNjYzZjNmMmE5M2M3YzA2MWJiZDIzYjO1d+Fn: 00:37:34.581 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDI3YTBmMmQxNTY2OGU5NWI5Y2QxZWRhZTQ5YjM2ZGI4OWUyZWQ1ZTRjYTlhNWMyZTY4ZjhmODc2NWQwYTQ2Narqa48=: ]] 00:37:34.581 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDI3YTBmMmQxNTY2OGU5NWI5Y2QxZWRhZTQ5YjM2ZGI4OWUyZWQ1ZTRjYTlhNWMyZTY4ZjhmODc2NWQwYTQ2Narqa48=: 00:37:34.581 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:37:34.581 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:34.581 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:34.581 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:34.581 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:34.581 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:34.581 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:37:34.581 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.581 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:34.581 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.581 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:34.581 10:53:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:34.581 10:53:40 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:37:34.581 10:53:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:34.581 10:53:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:34.581 10:53:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:34.581 10:53:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:34.581 10:53:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:34.581 10:53:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:34.581 10:53:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:34.581 10:53:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:34.581 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:34.581 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.581 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:34.842 nvme0n1 00:37:34.842 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.842 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:34.842 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.842 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:34.842 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:34.842 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.842 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:34.842 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:34.842 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.842 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.102 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:35.102 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:35.102 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:37:35.102 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:35.102 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:35.102 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:35.102 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:35.102 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzcwYTcwYmQ4NjkyYWExZmNhOGI1YjA5ZDY2NTEzMGZhZmVhYjU2ODA3ZWQ1ZmY1COXRbw==: 00:37:35.102 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWVkYTE3ZWE2MGQwZTE0ZDc5NDVmMmUxN2NmMTJmNTdjN2JiNWY5ZDYxNmU5ZjlitK/bBA==: 00:37:35.102 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:35.102 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:35.102 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NzcwYTcwYmQ4NjkyYWExZmNhOGI1YjA5ZDY2NTEzMGZhZmVhYjU2ODA3ZWQ1ZmY1COXRbw==: 00:37:35.102 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWVkYTE3ZWE2MGQwZTE0ZDc5NDVmMmUxN2NmMTJmNTdjN2JiNWY5ZDYxNmU5ZjlitK/bBA==: ]] 00:37:35.102 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWVkYTE3ZWE2MGQwZTE0ZDc5NDVmMmUxN2NmMTJmNTdjN2JiNWY5ZDYxNmU5ZjlitK/bBA==: 00:37:35.102 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:37:35.102 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:35.102 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:35.102 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:35.102 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:35.102 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:35.102 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:37:35.102 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:35.102 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.102 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:35.102 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:35.102 10:53:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:35.102 10:53:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:35.102 10:53:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:35.102 10:53:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:35.102 10:53:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:35.102 10:53:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:35.102 10:53:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:35.102 10:53:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:35.102 10:53:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:35.102 10:53:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:35.102 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:35.102 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:35.102 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.360 nvme0n1 00:37:35.360 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:35.360 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:35.360 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:35.360 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:35.360 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.360 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:35.360 10:53:40 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:35.361 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:35.361 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:35.361 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.361 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:35.361 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:35.361 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:37:35.361 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:35.361 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:35.361 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:35.361 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:35.361 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmIxYjhkNTliYzE1MGQ2ODZmZDgzNzlmNWNiY2UwMzk8eQIO: 00:37:35.361 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTI5OWRmODZhODgyNjE4MWNkNTgwMTM0MzBkYzljNTQ2R3GO: 00:37:35.361 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:35.361 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:35.361 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmIxYjhkNTliYzE1MGQ2ODZmZDgzNzlmNWNiY2UwMzk8eQIO: 00:37:35.361 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTI5OWRmODZhODgyNjE4MWNkNTgwMTM0MzBkYzljNTQ2R3GO: ]] 00:37:35.361 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTI5OWRmODZhODgyNjE4MWNkNTgwMTM0MzBkYzljNTQ2R3GO: 00:37:35.361 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:37:35.361 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:35.361 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:35.361 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:35.361 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:35.361 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:35.361 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:37:35.361 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:35.361 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.361 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:35.361 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:35.361 10:53:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:35.361 10:53:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:35.361 10:53:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:35.361 10:53:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:35.361 10:53:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:35.361 10:53:40 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:35.361 10:53:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:35.361 10:53:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:35.361 10:53:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:35.361 10:53:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:35.361 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:35.361 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:35.361 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.621 nvme0n1 00:37:35.621 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:35.621 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:35.621 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:35.621 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:35.621 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.621 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:35.621 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:35.621 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:35.621 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:35.621 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.621 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:35.621 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:35.621 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:37:35.621 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:35.621 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:35.621 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:35.621 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:35.621 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjA3ODIzZjk1ZmY4MTVlMDBiYzZiMTk1MDk2NzI4ZGE3MTNmODg3ODIwYjllZDU3xsMkhQ==: 00:37:35.621 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWY4ZDQ3ZDc4NzdlZGQzMTM1MTg3YTEzOTVjZWE0NGQeD0No: 00:37:35.621 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:35.621 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:35.621 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjA3ODIzZjk1ZmY4MTVlMDBiYzZiMTk1MDk2NzI4ZGE3MTNmODg3ODIwYjllZDU3xsMkhQ==: 00:37:35.621 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWY4ZDQ3ZDc4NzdlZGQzMTM1MTg3YTEzOTVjZWE0NGQeD0No: ]] 00:37:35.621 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWY4ZDQ3ZDc4NzdlZGQzMTM1MTg3YTEzOTVjZWE0NGQeD0No: 00:37:35.621 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:37:35.621 10:53:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:35.621 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:35.621 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:35.621 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:35.621 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:35.621 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:37:35.621 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:35.621 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.621 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:35.621 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:35.621 10:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:35.621 10:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:35.621 10:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:35.621 10:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:35.621 10:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:35.621 10:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:35.621 10:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:35.621 10:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:35.621 10:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:35.621 10:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:35.621 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:35.621 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:35.621 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.882 nvme0n1 00:37:35.882 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:35.882 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:35.882 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:35.882 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:35.882 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.882 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:35.882 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:35.883 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:35.883 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:35.883 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.883 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:35.883 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:37:35.883 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:37:35.883 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:35.883 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:35.883 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:35.883 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:35.883 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWVmZWQwZDczNmNhOTYzZjI5OTAwNDFmZjczNDQ0MTg5YjY3YjYwMjQ2Nzg0YWM1NzA0ZjZlOTg1ZTU5YTJjYxUuTy0=: 00:37:35.883 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:35.883 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:35.883 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:35.883 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWVmZWQwZDczNmNhOTYzZjI5OTAwNDFmZjczNDQ0MTg5YjY3YjYwMjQ2Nzg0YWM1NzA0ZjZlOTg1ZTU5YTJjYxUuTy0=: 00:37:35.883 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:35.883 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:37:35.883 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:35.883 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:35.883 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:35.883 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:35.883 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:35.883 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:37:35.883 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:35.883 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.883 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:35.883 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:35.883 10:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:35.883 10:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:35.883 10:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:35.883 10:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:35.883 10:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:35.883 10:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:35.883 10:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:35.883 10:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:35.883 10:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:35.883 10:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:35.883 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:35.883 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:37:35.883 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:36.143 nvme0n1 00:37:36.143 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.143 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:36.143 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:36.143 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.143 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:36.143 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.143 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:36.143 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:36.143 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.143 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:36.402 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.402 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:36.402 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:36.402 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:37:36.403 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:36.403 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:36.403 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:36.403 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:36.403 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJmNGY4ZjVhNjYzZjNmMmE5M2M3YzA2MWJiZDIzYjO1d+Fn: 00:37:36.403 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDI3YTBmMmQxNTY2OGU5NWI5Y2QxZWRhZTQ5YjM2ZGI4OWUyZWQ1ZTRjYTlhNWMyZTY4ZjhmODc2NWQwYTQ2Narqa48=: 00:37:36.403 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:36.403 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:36.403 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJmNGY4ZjVhNjYzZjNmMmE5M2M3YzA2MWJiZDIzYjO1d+Fn: 00:37:36.403 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDI3YTBmMmQxNTY2OGU5NWI5Y2QxZWRhZTQ5YjM2ZGI4OWUyZWQ1ZTRjYTlhNWMyZTY4ZjhmODc2NWQwYTQ2Narqa48=: ]] 00:37:36.403 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDI3YTBmMmQxNTY2OGU5NWI5Y2QxZWRhZTQ5YjM2ZGI4OWUyZWQ1ZTRjYTlhNWMyZTY4ZjhmODc2NWQwYTQ2Narqa48=: 00:37:36.403 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:37:36.403 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:36.403 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:36.403 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:36.403 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:36.403 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:36.403 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:37:36.403 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.403 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:36.403 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.403 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:36.403 10:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:36.403 10:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:36.403 10:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:36.403 10:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:36.403 10:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:36.403 10:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:36.403 10:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:36.403 10:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:36.403 10:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:36.403 10:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:36.403 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:36.403 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.403 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:36.662 nvme0n1 00:37:36.662 10:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.662 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:36.662 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:36.662 10:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.662 10:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:36.662 10:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.662 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:36.662 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:36.662 10:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.662 10:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:36.922 10:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.922 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:36.922 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:37:36.922 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:36.922 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:36.922 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:36.922 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:36.922 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NzcwYTcwYmQ4NjkyYWExZmNhOGI1YjA5ZDY2NTEzMGZhZmVhYjU2ODA3ZWQ1ZmY1COXRbw==: 00:37:36.922 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWVkYTE3ZWE2MGQwZTE0ZDc5NDVmMmUxN2NmMTJmNTdjN2JiNWY5ZDYxNmU5ZjlitK/bBA==: 00:37:36.922 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:36.922 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:36.922 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzcwYTcwYmQ4NjkyYWExZmNhOGI1YjA5ZDY2NTEzMGZhZmVhYjU2ODA3ZWQ1ZmY1COXRbw==: 00:37:36.922 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWVkYTE3ZWE2MGQwZTE0ZDc5NDVmMmUxN2NmMTJmNTdjN2JiNWY5ZDYxNmU5ZjlitK/bBA==: ]] 00:37:36.922 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWVkYTE3ZWE2MGQwZTE0ZDc5NDVmMmUxN2NmMTJmNTdjN2JiNWY5ZDYxNmU5ZjlitK/bBA==: 00:37:36.922 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:37:36.922 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:36.922 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:36.922 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:36.922 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:36.922 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:36.922 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:37:36.922 10:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.922 10:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:36.922 10:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.922 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:36.922 10:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:36.922 10:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:36.922 10:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:36.922 10:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:36.922 10:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:36.922 10:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:36.922 10:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:36.922 10:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:36.922 10:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:36.922 10:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:36.922 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:36.922 10:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.922 10:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:37.232 nvme0n1 00:37:37.232 10:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:37.232 10:53:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:37.232 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:37.232 10:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:37.232 10:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:37.232 10:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:37.232 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:37.232 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:37.232 10:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:37.232 10:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:37.232 10:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:37.232 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:37.232 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:37:37.232 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:37.232 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:37.232 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:37.232 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:37.232 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmIxYjhkNTliYzE1MGQ2ODZmZDgzNzlmNWNiY2UwMzk8eQIO: 00:37:37.232 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTI5OWRmODZhODgyNjE4MWNkNTgwMTM0MzBkYzljNTQ2R3GO: 00:37:37.232 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:37.232 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:37.232 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmIxYjhkNTliYzE1MGQ2ODZmZDgzNzlmNWNiY2UwMzk8eQIO: 00:37:37.232 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTI5OWRmODZhODgyNjE4MWNkNTgwMTM0MzBkYzljNTQ2R3GO: ]] 00:37:37.232 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTI5OWRmODZhODgyNjE4MWNkNTgwMTM0MzBkYzljNTQ2R3GO: 00:37:37.232 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:37:37.232 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:37.232 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:37.232 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:37.232 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:37.232 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:37.232 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:37:37.232 10:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:37.232 10:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:37.232 10:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:37.232 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:37.232 10:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:37:37.232 10:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:37.232 10:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:37.232 10:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:37.232 10:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:37.232 10:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:37.232 10:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:37.232 10:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:37.232 10:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:37.232 10:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:37.232 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:37.232 10:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:37.232 10:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:37.801 nvme0n1 00:37:37.801 10:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:37.801 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:37.801 10:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:37.801 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:37.801 10:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:37.801 10:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:37.801 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:37.801 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:37.801 10:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:37.801 10:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:37.801 10:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:37.801 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:37.801 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:37:37.801 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:37.801 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:37.801 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:37.801 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:37.801 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjA3ODIzZjk1ZmY4MTVlMDBiYzZiMTk1MDk2NzI4ZGE3MTNmODg3ODIwYjllZDU3xsMkhQ==: 00:37:37.801 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWY4ZDQ3ZDc4NzdlZGQzMTM1MTg3YTEzOTVjZWE0NGQeD0No: 00:37:37.801 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:37.801 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:37.801 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZjA3ODIzZjk1ZmY4MTVlMDBiYzZiMTk1MDk2NzI4ZGE3MTNmODg3ODIwYjllZDU3xsMkhQ==: 00:37:37.801 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWY4ZDQ3ZDc4NzdlZGQzMTM1MTg3YTEzOTVjZWE0NGQeD0No: ]] 00:37:37.801 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWY4ZDQ3ZDc4NzdlZGQzMTM1MTg3YTEzOTVjZWE0NGQeD0No: 00:37:37.801 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:37:37.801 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:37.801 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:37.801 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:37.801 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:37.801 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:37.801 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:37:37.801 10:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:37.801 10:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:37.801 10:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:37.801 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:37.801 10:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:37.801 10:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:37.801 10:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:37.801 10:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:37.801 10:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:37.801 10:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:37.801 10:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:37.801 10:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:37.801 10:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:37.801 10:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:37.801 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:37.801 10:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:37.801 10:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:38.371 nvme0n1 00:37:38.371 10:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:38.371 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:38.371 10:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:38.371 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:38.371 10:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:38.371 10:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:38.371 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:37:38.371 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:38.371 10:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:38.371 10:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:38.371 10:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:38.371 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:38.371 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:37:38.371 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:38.371 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:38.371 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:38.371 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:38.371 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWVmZWQwZDczNmNhOTYzZjI5OTAwNDFmZjczNDQ0MTg5YjY3YjYwMjQ2Nzg0YWM1NzA0ZjZlOTg1ZTU5YTJjYxUuTy0=: 00:37:38.371 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:38.371 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:38.371 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:38.371 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWVmZWQwZDczNmNhOTYzZjI5OTAwNDFmZjczNDQ0MTg5YjY3YjYwMjQ2Nzg0YWM1NzA0ZjZlOTg1ZTU5YTJjYxUuTy0=: 00:37:38.371 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:38.371 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:37:38.371 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:38.371 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:38.371 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:38.371 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:38.371 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:38.371 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:37:38.371 10:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:38.371 10:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:38.371 10:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:38.371 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:38.371 10:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:38.371 10:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:38.371 10:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:38.371 10:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:38.371 10:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:38.371 10:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:38.371 10:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:38.371 10:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:37:38.371 10:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:38.371 10:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:38.371 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:38.371 10:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:38.371 10:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:38.941 nvme0n1 00:37:38.941 10:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:38.941 10:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:38.941 10:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:38.941 10:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:38.941 10:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:38.941 10:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:38.941 10:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:38.941 10:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:38.941 10:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:38.941 10:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:38.941 10:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:38.941 10:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:38.941 10:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:38.941 10:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:37:38.941 10:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:38.941 10:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:38.941 10:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:38.941 10:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:38.941 10:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJmNGY4ZjVhNjYzZjNmMmE5M2M3YzA2MWJiZDIzYjO1d+Fn: 00:37:38.941 10:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDI3YTBmMmQxNTY2OGU5NWI5Y2QxZWRhZTQ5YjM2ZGI4OWUyZWQ1ZTRjYTlhNWMyZTY4ZjhmODc2NWQwYTQ2Narqa48=: 00:37:38.941 10:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:38.941 10:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:38.941 10:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJmNGY4ZjVhNjYzZjNmMmE5M2M3YzA2MWJiZDIzYjO1d+Fn: 00:37:38.941 10:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDI3YTBmMmQxNTY2OGU5NWI5Y2QxZWRhZTQ5YjM2ZGI4OWUyZWQ1ZTRjYTlhNWMyZTY4ZjhmODc2NWQwYTQ2Narqa48=: ]] 00:37:38.941 10:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDI3YTBmMmQxNTY2OGU5NWI5Y2QxZWRhZTQ5YjM2ZGI4OWUyZWQ1ZTRjYTlhNWMyZTY4ZjhmODc2NWQwYTQ2Narqa48=: 00:37:38.941 10:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:37:38.941 10:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
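The block above (and each nvme0n1 block before it) is one pass of connect_authenticate for a given digest, DH group and key index. Condensed into a standalone sketch, the host-side sequence looks like the following; it is written against scripts/rpc.py instead of the test's rpc_cmd wrapper (an assumption about how rpc_cmd resolves), while the address, NQNs and key names are copied verbatim from the trace and are expected to have been registered earlier in the test. For key index 4 the trace shows no controller key, so the --dhchap-ctrlr-key argument is simply dropped on that pass.

#!/usr/bin/env bash
# Sketch of one connect_authenticate <digest> <dhgroup> <keyid> pass, as exercised above.
# Assumes a running SPDK app and DH-HMAC-CHAP keys already registered under the names
# key<N>/ckey<N>, exactly as earlier in this test.
set -euo pipefail

digest=sha384
dhgroup=ffdhe8192
keyid=0

rpc=./scripts/rpc.py   # assumption: plain rpc.py in place of the autotest rpc_cmd wrapper

# 1. Restrict the initiator to the digest/DH group under test.
$rpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# 2. Connect to the target with the host key (and controller key for bidirectional auth).
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

# 3. Verify the controller came up, i.e. authentication succeeded.
$rpc bdev_nvme_get_controllers | jq -r '.[].name' | grep -qx nvme0

# 4. Tear it down before the next digest/dhgroup/keyid combination.
$rpc bdev_nvme_detach_controller nvme0
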
00:37:38.941 10:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:38.941 10:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:38.941 10:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:38.941 10:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:38.941 10:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:37:38.941 10:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:38.941 10:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:38.941 10:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:38.941 10:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:38.941 10:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:38.941 10:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:38.941 10:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:38.941 10:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:38.941 10:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:38.941 10:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:38.941 10:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:38.941 10:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:38.941 10:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:38.941 10:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:38.941 10:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:38.941 10:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:38.941 10:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:39.513 nvme0n1 00:37:39.773 10:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:39.773 10:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:39.773 10:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:39.773 10:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:39.773 10:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:39.773 10:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:39.773 10:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:39.773 10:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:39.773 10:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:39.773 10:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:39.773 10:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:39.773 10:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:39.773 10:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:37:39.773 10:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:39.773 10:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:39.773 10:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:39.773 10:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:39.773 10:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzcwYTcwYmQ4NjkyYWExZmNhOGI1YjA5ZDY2NTEzMGZhZmVhYjU2ODA3ZWQ1ZmY1COXRbw==: 00:37:39.773 10:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWVkYTE3ZWE2MGQwZTE0ZDc5NDVmMmUxN2NmMTJmNTdjN2JiNWY5ZDYxNmU5ZjlitK/bBA==: 00:37:39.773 10:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:39.773 10:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:39.773 10:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzcwYTcwYmQ4NjkyYWExZmNhOGI1YjA5ZDY2NTEzMGZhZmVhYjU2ODA3ZWQ1ZmY1COXRbw==: 00:37:39.773 10:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWVkYTE3ZWE2MGQwZTE0ZDc5NDVmMmUxN2NmMTJmNTdjN2JiNWY5ZDYxNmU5ZjlitK/bBA==: ]] 00:37:39.773 10:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWVkYTE3ZWE2MGQwZTE0ZDc5NDVmMmUxN2NmMTJmNTdjN2JiNWY5ZDYxNmU5ZjlitK/bBA==: 00:37:39.773 10:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:37:39.773 10:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:39.773 10:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:39.773 10:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:39.773 10:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:39.773 10:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:39.773 10:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:37:39.773 10:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:39.773 10:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:39.773 10:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:39.773 10:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:39.773 10:53:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:39.773 10:53:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:39.773 10:53:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:39.773 10:53:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:39.773 10:53:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:39.773 10:53:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:39.773 10:53:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:39.773 10:53:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:39.773 10:53:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:39.773 10:53:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:39.773 10:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:39.773 10:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:39.773 10:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:40.344 nvme0n1 00:37:40.344 10:53:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:40.344 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:40.344 10:53:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:40.344 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:40.344 10:53:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:40.344 10:53:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:40.604 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:40.604 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:40.604 10:53:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:40.604 10:53:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:40.604 10:53:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:40.604 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:40.604 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:37:40.604 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:40.604 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:40.604 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:40.604 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:40.604 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmIxYjhkNTliYzE1MGQ2ODZmZDgzNzlmNWNiY2UwMzk8eQIO: 00:37:40.604 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTI5OWRmODZhODgyNjE4MWNkNTgwMTM0MzBkYzljNTQ2R3GO: 00:37:40.604 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:40.604 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:40.604 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmIxYjhkNTliYzE1MGQ2ODZmZDgzNzlmNWNiY2UwMzk8eQIO: 00:37:40.604 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTI5OWRmODZhODgyNjE4MWNkNTgwMTM0MzBkYzljNTQ2R3GO: ]] 00:37:40.604 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTI5OWRmODZhODgyNjE4MWNkNTgwMTM0MzBkYzljNTQ2R3GO: 00:37:40.604 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:37:40.604 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:40.604 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:40.604 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:40.604 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:40.604 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:40.605 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:37:40.605 10:53:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:40.605 10:53:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:40.605 10:53:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:40.605 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:40.605 10:53:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:40.605 10:53:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:40.605 10:53:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:40.605 10:53:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:40.605 10:53:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:40.605 10:53:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:40.605 10:53:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:40.605 10:53:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:40.605 10:53:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:40.605 10:53:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:40.605 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:40.605 10:53:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:40.605 10:53:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:41.177 nvme0n1 00:37:41.177 10:53:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:41.177 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:41.177 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:41.177 10:53:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:41.177 10:53:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:41.177 10:53:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:41.177 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:41.177 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:41.177 10:53:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:41.177 10:53:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:41.438 10:53:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:41.438 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:41.438 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:37:41.438 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:41.438 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:41.438 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:41.438 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:41.438 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZjA3ODIzZjk1ZmY4MTVlMDBiYzZiMTk1MDk2NzI4ZGE3MTNmODg3ODIwYjllZDU3xsMkhQ==: 00:37:41.438 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWY4ZDQ3ZDc4NzdlZGQzMTM1MTg3YTEzOTVjZWE0NGQeD0No: 00:37:41.438 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:41.438 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:41.438 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjA3ODIzZjk1ZmY4MTVlMDBiYzZiMTk1MDk2NzI4ZGE3MTNmODg3ODIwYjllZDU3xsMkhQ==: 00:37:41.438 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWY4ZDQ3ZDc4NzdlZGQzMTM1MTg3YTEzOTVjZWE0NGQeD0No: ]] 00:37:41.438 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWY4ZDQ3ZDc4NzdlZGQzMTM1MTg3YTEzOTVjZWE0NGQeD0No: 00:37:41.438 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:37:41.438 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:41.438 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:41.438 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:41.438 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:41.438 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:41.438 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:37:41.438 10:53:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:41.438 10:53:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:41.438 10:53:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:41.438 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:41.438 10:53:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:41.438 10:53:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:41.438 10:53:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:41.438 10:53:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:41.438 10:53:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:41.438 10:53:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:41.438 10:53:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:41.438 10:53:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:41.438 10:53:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:41.438 10:53:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:41.438 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:41.438 10:53:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:41.438 10:53:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:42.012 nvme0n1 00:37:42.012 10:53:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:42.012 10:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:37:42.012 10:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:42.012 10:53:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:42.012 10:53:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:42.012 10:53:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:42.012 10:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:42.012 10:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:42.012 10:53:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:42.012 10:53:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:42.012 10:53:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:42.012 10:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:42.012 10:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:37:42.012 10:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:42.012 10:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:42.012 10:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:42.012 10:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:42.012 10:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWVmZWQwZDczNmNhOTYzZjI5OTAwNDFmZjczNDQ0MTg5YjY3YjYwMjQ2Nzg0YWM1NzA0ZjZlOTg1ZTU5YTJjYxUuTy0=: 00:37:42.012 10:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:42.012 10:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:42.012 10:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:42.012 10:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWVmZWQwZDczNmNhOTYzZjI5OTAwNDFmZjczNDQ0MTg5YjY3YjYwMjQ2Nzg0YWM1NzA0ZjZlOTg1ZTU5YTJjYxUuTy0=: 00:37:42.012 10:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:42.012 10:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:37:42.012 10:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:42.012 10:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:42.012 10:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:42.012 10:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:42.012 10:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:42.012 10:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:37:42.012 10:53:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:42.012 10:53:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:42.274 10:53:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:42.274 10:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:42.274 10:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:42.274 10:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:42.274 10:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:42.274 10:53:47 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:42.274 10:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:42.274 10:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:42.274 10:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:42.274 10:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:42.274 10:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:42.274 10:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:42.274 10:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:42.274 10:53:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:42.274 10:53:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:42.845 nvme0n1 00:37:42.845 10:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:42.845 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:42.845 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:42.845 10:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:42.845 10:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:42.845 10:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:42.845 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:42.845 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:42.845 10:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:42.845 10:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:42.845 10:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:42.845 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:37:42.845 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:42.845 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:42.845 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:37:42.845 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:42.845 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:42.845 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:42.845 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:42.845 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJmNGY4ZjVhNjYzZjNmMmE5M2M3YzA2MWJiZDIzYjO1d+Fn: 00:37:42.845 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDI3YTBmMmQxNTY2OGU5NWI5Y2QxZWRhZTQ5YjM2ZGI4OWUyZWQ1ZTRjYTlhNWMyZTY4ZjhmODc2NWQwYTQ2Narqa48=: 00:37:42.845 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:42.845 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:42.845 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZjJmNGY4ZjVhNjYzZjNmMmE5M2M3YzA2MWJiZDIzYjO1d+Fn: 00:37:42.845 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDI3YTBmMmQxNTY2OGU5NWI5Y2QxZWRhZTQ5YjM2ZGI4OWUyZWQ1ZTRjYTlhNWMyZTY4ZjhmODc2NWQwYTQ2Narqa48=: ]] 00:37:42.845 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDI3YTBmMmQxNTY2OGU5NWI5Y2QxZWRhZTQ5YjM2ZGI4OWUyZWQ1ZTRjYTlhNWMyZTY4ZjhmODc2NWQwYTQ2Narqa48=: 00:37:42.845 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:37:42.845 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:42.845 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:42.845 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:42.845 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:42.845 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:42.845 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:37:42.845 10:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:42.845 10:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:42.845 10:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:42.845 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:42.845 10:53:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:42.845 10:53:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:42.845 10:53:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:42.845 10:53:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:42.845 10:53:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:42.845 10:53:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:42.845 10:53:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:42.845 10:53:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:42.845 10:53:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:42.845 10:53:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:42.845 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:42.845 10:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:42.845 10:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:43.106 nvme0n1 00:37:43.106 10:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:43.106 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:43.106 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:43.106 10:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:43.106 10:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:43.106 10:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:43.106 10:53:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:43.106 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:43.106 10:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:43.106 10:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:43.106 10:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:43.106 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:43.106 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:37:43.106 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:43.106 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:43.106 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:43.106 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:43.106 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzcwYTcwYmQ4NjkyYWExZmNhOGI1YjA5ZDY2NTEzMGZhZmVhYjU2ODA3ZWQ1ZmY1COXRbw==: 00:37:43.106 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWVkYTE3ZWE2MGQwZTE0ZDc5NDVmMmUxN2NmMTJmNTdjN2JiNWY5ZDYxNmU5ZjlitK/bBA==: 00:37:43.106 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:43.106 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:43.106 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzcwYTcwYmQ4NjkyYWExZmNhOGI1YjA5ZDY2NTEzMGZhZmVhYjU2ODA3ZWQ1ZmY1COXRbw==: 00:37:43.106 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWVkYTE3ZWE2MGQwZTE0ZDc5NDVmMmUxN2NmMTJmNTdjN2JiNWY5ZDYxNmU5ZjlitK/bBA==: ]] 00:37:43.106 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWVkYTE3ZWE2MGQwZTE0ZDc5NDVmMmUxN2NmMTJmNTdjN2JiNWY5ZDYxNmU5ZjlitK/bBA==: 00:37:43.106 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:37:43.106 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:43.106 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:43.106 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:43.106 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:43.106 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:43.106 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:37:43.106 10:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:43.106 10:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:43.106 10:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:43.106 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:43.106 10:53:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:43.106 10:53:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:43.106 10:53:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:43.106 10:53:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:43.107 10:53:48 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:43.107 10:53:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:43.107 10:53:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:43.107 10:53:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:43.107 10:53:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:43.107 10:53:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:43.107 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:43.107 10:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:43.107 10:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:43.368 nvme0n1 00:37:43.368 10:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:43.368 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:43.368 10:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:43.368 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:43.368 10:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:43.368 10:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:43.368 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:43.368 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:43.368 10:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:43.368 10:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:43.368 10:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:43.368 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:43.368 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:37:43.368 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:43.368 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:43.368 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:43.368 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:43.368 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmIxYjhkNTliYzE1MGQ2ODZmZDgzNzlmNWNiY2UwMzk8eQIO: 00:37:43.368 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTI5OWRmODZhODgyNjE4MWNkNTgwMTM0MzBkYzljNTQ2R3GO: 00:37:43.368 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:43.368 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:43.368 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmIxYjhkNTliYzE1MGQ2ODZmZDgzNzlmNWNiY2UwMzk8eQIO: 00:37:43.368 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTI5OWRmODZhODgyNjE4MWNkNTgwMTM0MzBkYzljNTQ2R3GO: ]] 00:37:43.368 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTI5OWRmODZhODgyNjE4MWNkNTgwMTM0MzBkYzljNTQ2R3GO: 00:37:43.368 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:37:43.368 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:43.368 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:43.368 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:43.368 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:43.368 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:43.368 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:37:43.368 10:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:43.368 10:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:43.368 10:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:43.368 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:43.368 10:53:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:43.368 10:53:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:43.368 10:53:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:43.368 10:53:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:43.368 10:53:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:43.368 10:53:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:43.368 10:53:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:43.368 10:53:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:43.368 10:53:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:43.368 10:53:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:43.368 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:43.368 10:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:43.368 10:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:43.628 nvme0n1 00:37:43.628 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:43.628 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:43.628 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:43.628 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:43.628 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:43.628 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:43.628 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:43.628 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:43.628 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:43.628 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:43.628 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:43.628 10:53:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:43.628 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:37:43.628 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:43.628 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:43.628 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:43.628 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:43.628 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjA3ODIzZjk1ZmY4MTVlMDBiYzZiMTk1MDk2NzI4ZGE3MTNmODg3ODIwYjllZDU3xsMkhQ==: 00:37:43.628 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWY4ZDQ3ZDc4NzdlZGQzMTM1MTg3YTEzOTVjZWE0NGQeD0No: 00:37:43.628 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:43.628 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:43.628 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjA3ODIzZjk1ZmY4MTVlMDBiYzZiMTk1MDk2NzI4ZGE3MTNmODg3ODIwYjllZDU3xsMkhQ==: 00:37:43.628 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWY4ZDQ3ZDc4NzdlZGQzMTM1MTg3YTEzOTVjZWE0NGQeD0No: ]] 00:37:43.628 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWY4ZDQ3ZDc4NzdlZGQzMTM1MTg3YTEzOTVjZWE0NGQeD0No: 00:37:43.628 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:37:43.628 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:43.628 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:43.628 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:43.629 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:43.629 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:43.629 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:37:43.629 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:43.629 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:43.629 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:43.629 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:43.629 10:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:43.629 10:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:43.629 10:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:43.629 10:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:43.629 10:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:43.629 10:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:43.629 10:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:43.629 10:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:43.629 10:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:43.629 10:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:43.629 10:53:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:43.629 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:43.629 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:43.629 nvme0n1 00:37:43.629 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWVmZWQwZDczNmNhOTYzZjI5OTAwNDFmZjczNDQ0MTg5YjY3YjYwMjQ2Nzg0YWM1NzA0ZjZlOTg1ZTU5YTJjYxUuTy0=: 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWVmZWQwZDczNmNhOTYzZjI5OTAwNDFmZjczNDQ0MTg5YjY3YjYwMjQ2Nzg0YWM1NzA0ZjZlOTg1ZTU5YTJjYxUuTy0=: 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:43.890 nvme0n1 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZjJmNGY4ZjVhNjYzZjNmMmE5M2M3YzA2MWJiZDIzYjO1d+Fn: 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDI3YTBmMmQxNTY2OGU5NWI5Y2QxZWRhZTQ5YjM2ZGI4OWUyZWQ1ZTRjYTlhNWMyZTY4ZjhmODc2NWQwYTQ2Narqa48=: 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJmNGY4ZjVhNjYzZjNmMmE5M2M3YzA2MWJiZDIzYjO1d+Fn: 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDI3YTBmMmQxNTY2OGU5NWI5Y2QxZWRhZTQ5YjM2ZGI4OWUyZWQ1ZTRjYTlhNWMyZTY4ZjhmODc2NWQwYTQ2Narqa48=: ]] 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDI3YTBmMmQxNTY2OGU5NWI5Y2QxZWRhZTQ5YjM2ZGI4OWUyZWQ1ZTRjYTlhNWMyZTY4ZjhmODc2NWQwYTQ2Narqa48=: 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:43.890 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:44.150 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.150 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:44.150 10:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:44.150 10:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:44.150 10:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:44.151 10:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:44.151 10:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:44.151 10:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:44.151 10:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:44.151 10:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:44.151 10:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:44.151 10:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:44.151 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:44.151 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.151 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:44.151 nvme0n1 00:37:44.151 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.151 
10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:44.151 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:44.151 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.151 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:44.151 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.151 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:44.151 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:44.151 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.151 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:44.151 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.151 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:44.151 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:37:44.151 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:44.151 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:44.151 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:44.151 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:44.151 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzcwYTcwYmQ4NjkyYWExZmNhOGI1YjA5ZDY2NTEzMGZhZmVhYjU2ODA3ZWQ1ZmY1COXRbw==: 00:37:44.151 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWVkYTE3ZWE2MGQwZTE0ZDc5NDVmMmUxN2NmMTJmNTdjN2JiNWY5ZDYxNmU5ZjlitK/bBA==: 00:37:44.151 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:44.151 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:44.151 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzcwYTcwYmQ4NjkyYWExZmNhOGI1YjA5ZDY2NTEzMGZhZmVhYjU2ODA3ZWQ1ZmY1COXRbw==: 00:37:44.151 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWVkYTE3ZWE2MGQwZTE0ZDc5NDVmMmUxN2NmMTJmNTdjN2JiNWY5ZDYxNmU5ZjlitK/bBA==: ]] 00:37:44.151 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWVkYTE3ZWE2MGQwZTE0ZDc5NDVmMmUxN2NmMTJmNTdjN2JiNWY5ZDYxNmU5ZjlitK/bBA==: 00:37:44.151 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:37:44.151 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:44.151 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:44.151 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:44.151 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:44.151 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:44.151 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:37:44.151 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.151 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:44.411 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.411 10:53:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:44.411 10:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:44.411 10:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:44.411 10:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:44.411 10:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:44.411 10:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:44.411 10:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:44.411 10:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:44.411 10:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:44.411 10:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:44.411 10:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:44.411 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:44.411 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.411 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:44.411 nvme0n1 00:37:44.411 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.411 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:44.411 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:44.411 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.411 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:44.411 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.411 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:44.411 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:44.411 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.411 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:44.411 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.411 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:44.411 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:37:44.411 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:44.411 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:44.411 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:44.411 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:44.411 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmIxYjhkNTliYzE1MGQ2ODZmZDgzNzlmNWNiY2UwMzk8eQIO: 00:37:44.411 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTI5OWRmODZhODgyNjE4MWNkNTgwMTM0MzBkYzljNTQ2R3GO: 00:37:44.411 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:44.411 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
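The entries just above are one pass of nvmet_auth_set_key (sha512, ffdhe3072, keyid 2): the helper echoes the digest as 'hmac(sha512)', then the DH group, then the DHHC-1 host secret and, when one exists, the controller secret for bidirectional authentication. A minimal sketch of what such a helper typically writes on the Linux kernel nvmet side is shown below; the configfs paths are assumptions based on the usual nvmet DH-HMAC-CHAP host attributes, not values taken from this trace (the host NQN and the two secrets are the ones visible in the surrounding entries).

  # Sketch only: program DH-HMAC-CHAP parameters for one allowed host on the kernel nvmet target.
  # The real helper is nvmet_auth_set_key in test/nvmf/host/auth.sh; the paths below are assumed.
  hostnqn=nqn.2024-02.io.spdk:host0
  host_dir=/sys/kernel/config/nvmet/hosts/$hostnqn
  echo 'hmac(sha512)' > "$host_dir/dhchap_hash"      # HMAC digest used for the challenge
  echo ffdhe3072 > "$host_dir/dhchap_dhgroup"        # FFDHE group for the DH exchange
  echo 'DHHC-1:01:NmIxYjhkNTliYzE1MGQ2ODZmZDgzNzlmNWNiY2UwMzk8eQIO:' > "$host_dir/dhchap_key"       # host secret (keyid 2)
  echo 'DHHC-1:01:MTI5OWRmODZhODgyNjE4MWNkNTgwMTM0MzBkYzljNTQ2R3GO:' > "$host_dir/dhchap_ctrl_key"  # controller secret (ckey 2)

Keyid 4 in this trace has an empty ckey, so for that iteration only the unidirectional host secret is configured.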
00:37:44.411 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmIxYjhkNTliYzE1MGQ2ODZmZDgzNzlmNWNiY2UwMzk8eQIO: 00:37:44.411 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTI5OWRmODZhODgyNjE4MWNkNTgwMTM0MzBkYzljNTQ2R3GO: ]] 00:37:44.411 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTI5OWRmODZhODgyNjE4MWNkNTgwMTM0MzBkYzljNTQ2R3GO: 00:37:44.411 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:37:44.411 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:44.411 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:44.411 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:44.411 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:44.411 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:44.411 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:37:44.411 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.411 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:44.411 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.411 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:44.411 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:44.411 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:44.411 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:44.411 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:44.411 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:44.411 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:44.411 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:44.411 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:44.411 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:44.411 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:44.411 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:44.411 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.411 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:44.673 nvme0n1 00:37:44.673 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.673 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:44.673 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:44.673 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.673 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:44.673 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.673 10:53:50 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:44.673 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:44.673 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.673 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:44.673 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.673 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:44.673 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:37:44.673 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:44.673 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:44.673 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:44.673 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:44.673 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjA3ODIzZjk1ZmY4MTVlMDBiYzZiMTk1MDk2NzI4ZGE3MTNmODg3ODIwYjllZDU3xsMkhQ==: 00:37:44.673 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWY4ZDQ3ZDc4NzdlZGQzMTM1MTg3YTEzOTVjZWE0NGQeD0No: 00:37:44.673 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:44.673 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:44.673 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjA3ODIzZjk1ZmY4MTVlMDBiYzZiMTk1MDk2NzI4ZGE3MTNmODg3ODIwYjllZDU3xsMkhQ==: 00:37:44.673 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWY4ZDQ3ZDc4NzdlZGQzMTM1MTg3YTEzOTVjZWE0NGQeD0No: ]] 00:37:44.673 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWY4ZDQ3ZDc4NzdlZGQzMTM1MTg3YTEzOTVjZWE0NGQeD0No: 00:37:44.673 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:37:44.673 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:44.673 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:44.673 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:44.673 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:44.673 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:44.673 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:37:44.673 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.673 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:44.673 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.673 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:44.673 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:44.673 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:44.673 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:44.673 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:44.673 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
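connect_authenticate (here sha512, ffdhe3072, keyid 3) is the host-side half of each iteration: it restricts the allowed digest and DH group with bdev_nvme_set_options, resolves the initiator IP (10.0.0.1 in this run), and attaches the controller with the matching DH-HMAC-CHAP key. Roughly the same sequence can be issued by hand through scripts/rpc.py, as sketched below; this assumes a running SPDK application and that secrets named key3/ckey3 were registered earlier in auth.sh (that setup is not part of this excerpt).

  # Sketch of the host-side RPCs behind connect_authenticate sha512 ffdhe3072 3.
  ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key3 --dhchap-ctrlr-key ckey3

The --dhchap-ctrlr-key argument is only passed when a controller secret exists for that keyid (the ${ckeys[keyid]:+...} expansion at host/auth.sh@58), which is why keyid 4, whose ckey is empty, is attached with --dhchap-key alone.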
00:37:44.673 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:44.673 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:44.673 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:44.673 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:44.673 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:44.673 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:44.673 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.673 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:44.934 nvme0n1 00:37:44.934 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.934 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:44.934 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:44.934 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.934 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:44.934 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.934 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:44.934 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:44.934 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.934 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:44.934 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.934 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:44.934 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:37:44.934 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:44.934 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:44.934 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:44.934 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:44.934 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWVmZWQwZDczNmNhOTYzZjI5OTAwNDFmZjczNDQ0MTg5YjY3YjYwMjQ2Nzg0YWM1NzA0ZjZlOTg1ZTU5YTJjYxUuTy0=: 00:37:44.934 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:44.934 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:44.934 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:44.934 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWVmZWQwZDczNmNhOTYzZjI5OTAwNDFmZjczNDQ0MTg5YjY3YjYwMjQ2Nzg0YWM1NzA0ZjZlOTg1ZTU5YTJjYxUuTy0=: 00:37:44.934 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:44.934 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:37:44.934 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:44.934 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:44.934 
10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:44.934 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:44.934 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:44.934 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:37:44.934 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.934 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:44.934 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.934 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:44.934 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:44.934 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:44.934 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:44.934 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:44.934 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:44.934 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:44.934 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:44.934 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:44.934 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:44.934 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:44.934 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:44.934 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.934 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:45.195 nvme0n1 00:37:45.195 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:45.195 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:45.195 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:45.195 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:45.195 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:45.195 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:45.195 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:45.195 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:45.195 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:45.195 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:45.195 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:45.195 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:45.195 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:45.195 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:37:45.195 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:45.195 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:45.195 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:45.195 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:45.195 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJmNGY4ZjVhNjYzZjNmMmE5M2M3YzA2MWJiZDIzYjO1d+Fn: 00:37:45.195 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDI3YTBmMmQxNTY2OGU5NWI5Y2QxZWRhZTQ5YjM2ZGI4OWUyZWQ1ZTRjYTlhNWMyZTY4ZjhmODc2NWQwYTQ2Narqa48=: 00:37:45.195 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:45.195 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:45.195 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJmNGY4ZjVhNjYzZjNmMmE5M2M3YzA2MWJiZDIzYjO1d+Fn: 00:37:45.195 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDI3YTBmMmQxNTY2OGU5NWI5Y2QxZWRhZTQ5YjM2ZGI4OWUyZWQ1ZTRjYTlhNWMyZTY4ZjhmODc2NWQwYTQ2Narqa48=: ]] 00:37:45.195 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDI3YTBmMmQxNTY2OGU5NWI5Y2QxZWRhZTQ5YjM2ZGI4OWUyZWQ1ZTRjYTlhNWMyZTY4ZjhmODc2NWQwYTQ2Narqa48=: 00:37:45.196 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:37:45.196 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:45.196 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:45.196 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:45.196 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:45.196 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:45.196 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:37:45.196 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:45.196 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:45.196 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:45.196 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:45.196 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:45.196 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:45.196 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:45.196 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:45.196 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:45.196 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:45.196 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:45.196 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:45.196 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:45.196 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:45.196 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:45.196 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:45.196 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:45.456 nvme0n1 00:37:45.456 10:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:45.456 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:45.456 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:45.456 10:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:45.456 10:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:45.716 10:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:45.716 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:45.716 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:45.716 10:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:45.716 10:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:45.716 10:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:45.716 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:45.716 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:37:45.716 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:45.716 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:45.716 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:45.716 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:45.716 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzcwYTcwYmQ4NjkyYWExZmNhOGI1YjA5ZDY2NTEzMGZhZmVhYjU2ODA3ZWQ1ZmY1COXRbw==: 00:37:45.716 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWVkYTE3ZWE2MGQwZTE0ZDc5NDVmMmUxN2NmMTJmNTdjN2JiNWY5ZDYxNmU5ZjlitK/bBA==: 00:37:45.716 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:45.716 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:45.716 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzcwYTcwYmQ4NjkyYWExZmNhOGI1YjA5ZDY2NTEzMGZhZmVhYjU2ODA3ZWQ1ZmY1COXRbw==: 00:37:45.716 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWVkYTE3ZWE2MGQwZTE0ZDc5NDVmMmUxN2NmMTJmNTdjN2JiNWY5ZDYxNmU5ZjlitK/bBA==: ]] 00:37:45.716 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWVkYTE3ZWE2MGQwZTE0ZDc5NDVmMmUxN2NmMTJmNTdjN2JiNWY5ZDYxNmU5ZjlitK/bBA==: 00:37:45.716 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:37:45.716 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:45.716 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:45.716 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:45.716 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:45.716 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:45.716 10:53:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:37:45.716 10:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:45.716 10:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:45.716 10:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:45.716 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:45.716 10:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:45.716 10:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:45.716 10:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:45.716 10:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:45.716 10:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:45.716 10:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:45.716 10:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:45.716 10:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:45.716 10:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:45.716 10:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:45.716 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:45.716 10:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:45.716 10:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:46.020 nvme0n1 00:37:46.020 10:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:46.020 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:46.020 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:46.020 10:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:46.020 10:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:46.020 10:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:46.020 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:46.020 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:46.020 10:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:46.020 10:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:46.020 10:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:46.020 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:46.020 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:37:46.020 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:46.020 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:46.020 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:46.020 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
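The nvmet_auth_set_key calls traced above (host/auth.sh@42-@51) only show the digest, dhgroup and DHHC-1 key strings being echoed, not where they land. A minimal sketch of what such a helper plausibly does against the Linux kernel nvmet target, assuming the usual per-host configfs attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key); the configfs path, the host NQN entry and the keys/ckeys arrays are assumptions inferred from the surrounding trace, not something this excerpt spells out:

    # Hedged reconstruction -- paths and variable wiring are assumed, not taken verbatim from auth.sh.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}            # arrays populated earlier in the test script
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed nvmet configfs entry
        echo "hmac(${digest})" > "${host}/dhchap_hash"           # e.g. hmac(sha512)
        echo "${dhgroup}"      > "${host}/dhchap_dhgroup"        # e.g. ffdhe4096
        echo "${key}"          > "${host}/dhchap_key"            # DHHC-1:... host key
        [[ -z ${ckey} ]] || echo "${ckey}" > "${host}/dhchap_ctrl_key"   # bidirectional auth key, when present
    }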
00:37:46.020 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmIxYjhkNTliYzE1MGQ2ODZmZDgzNzlmNWNiY2UwMzk8eQIO: 00:37:46.020 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTI5OWRmODZhODgyNjE4MWNkNTgwMTM0MzBkYzljNTQ2R3GO: 00:37:46.020 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:46.020 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:46.020 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmIxYjhkNTliYzE1MGQ2ODZmZDgzNzlmNWNiY2UwMzk8eQIO: 00:37:46.020 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTI5OWRmODZhODgyNjE4MWNkNTgwMTM0MzBkYzljNTQ2R3GO: ]] 00:37:46.020 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTI5OWRmODZhODgyNjE4MWNkNTgwMTM0MzBkYzljNTQ2R3GO: 00:37:46.020 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:37:46.020 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:46.020 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:46.020 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:46.020 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:46.020 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:46.020 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:37:46.020 10:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:46.020 10:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:46.020 10:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:46.020 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:46.020 10:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:46.020 10:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:46.020 10:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:46.020 10:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:46.020 10:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:46.020 10:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:46.020 10:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:46.020 10:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:46.020 10:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:46.020 10:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:46.020 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:46.020 10:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:46.020 10:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:46.281 nvme0n1 00:37:46.281 10:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:46.281 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:37:46.281 10:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:46.281 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:46.281 10:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:46.281 10:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:46.281 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:46.281 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:46.281 10:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:46.281 10:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:46.281 10:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:46.281 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:46.281 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:37:46.281 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:46.281 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:46.281 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:46.281 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:46.281 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjA3ODIzZjk1ZmY4MTVlMDBiYzZiMTk1MDk2NzI4ZGE3MTNmODg3ODIwYjllZDU3xsMkhQ==: 00:37:46.281 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWY4ZDQ3ZDc4NzdlZGQzMTM1MTg3YTEzOTVjZWE0NGQeD0No: 00:37:46.281 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:46.281 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:46.281 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjA3ODIzZjk1ZmY4MTVlMDBiYzZiMTk1MDk2NzI4ZGE3MTNmODg3ODIwYjllZDU3xsMkhQ==: 00:37:46.281 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWY4ZDQ3ZDc4NzdlZGQzMTM1MTg3YTEzOTVjZWE0NGQeD0No: ]] 00:37:46.281 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWY4ZDQ3ZDc4NzdlZGQzMTM1MTg3YTEzOTVjZWE0NGQeD0No: 00:37:46.281 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:37:46.281 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:46.281 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:46.281 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:46.281 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:46.281 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:46.281 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:37:46.281 10:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:46.281 10:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:46.281 10:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:46.281 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:46.281 10:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:37:46.281 10:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:46.281 10:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:46.281 10:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:46.281 10:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:46.281 10:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:46.281 10:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:46.281 10:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:46.281 10:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:46.281 10:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:46.281 10:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:46.281 10:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:46.281 10:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:46.541 nvme0n1 00:37:46.541 10:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:46.541 10:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:46.541 10:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:46.541 10:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:46.541 10:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:46.541 10:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:46.541 10:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:46.541 10:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:46.541 10:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:46.541 10:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:46.541 10:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:46.541 10:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:46.541 10:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:37:46.541 10:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:46.541 10:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:46.541 10:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:46.542 10:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:46.542 10:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWVmZWQwZDczNmNhOTYzZjI5OTAwNDFmZjczNDQ0MTg5YjY3YjYwMjQ2Nzg0YWM1NzA0ZjZlOTg1ZTU5YTJjYxUuTy0=: 00:37:46.542 10:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:46.801 10:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:46.801 10:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:46.801 10:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YWVmZWQwZDczNmNhOTYzZjI5OTAwNDFmZjczNDQ0MTg5YjY3YjYwMjQ2Nzg0YWM1NzA0ZjZlOTg1ZTU5YTJjYxUuTy0=: 00:37:46.801 10:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:46.801 10:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:37:46.801 10:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:46.801 10:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:46.801 10:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:46.801 10:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:46.801 10:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:46.801 10:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:37:46.801 10:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:46.801 10:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:46.801 10:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:46.801 10:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:46.801 10:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:46.801 10:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:46.801 10:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:46.801 10:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:46.801 10:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:46.801 10:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:46.801 10:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:46.801 10:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:46.801 10:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:46.801 10:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:46.801 10:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:46.801 10:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:46.801 10:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:47.060 nvme0n1 00:37:47.060 10:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:47.060 10:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:47.060 10:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:47.060 10:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:47.060 10:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:47.060 10:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:47.060 10:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:47.060 10:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:47.060 10:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:37:47.060 10:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:47.060 10:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:47.060 10:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:47.060 10:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:47.060 10:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:37:47.060 10:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:47.060 10:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:47.060 10:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:47.060 10:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:47.060 10:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJmNGY4ZjVhNjYzZjNmMmE5M2M3YzA2MWJiZDIzYjO1d+Fn: 00:37:47.060 10:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDI3YTBmMmQxNTY2OGU5NWI5Y2QxZWRhZTQ5YjM2ZGI4OWUyZWQ1ZTRjYTlhNWMyZTY4ZjhmODc2NWQwYTQ2Narqa48=: 00:37:47.060 10:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:47.060 10:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:47.060 10:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJmNGY4ZjVhNjYzZjNmMmE5M2M3YzA2MWJiZDIzYjO1d+Fn: 00:37:47.060 10:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDI3YTBmMmQxNTY2OGU5NWI5Y2QxZWRhZTQ5YjM2ZGI4OWUyZWQ1ZTRjYTlhNWMyZTY4ZjhmODc2NWQwYTQ2Narqa48=: ]] 00:37:47.060 10:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDI3YTBmMmQxNTY2OGU5NWI5Y2QxZWRhZTQ5YjM2ZGI4OWUyZWQ1ZTRjYTlhNWMyZTY4ZjhmODc2NWQwYTQ2Narqa48=: 00:37:47.060 10:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:37:47.060 10:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:47.060 10:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:47.060 10:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:47.060 10:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:47.060 10:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:47.060 10:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:37:47.060 10:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:47.060 10:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:47.060 10:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:47.060 10:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:47.060 10:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:47.060 10:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:47.060 10:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:47.060 10:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:47.060 10:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:47.060 10:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
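The nvmf/common.sh@741-@755 entries being traced here belong to the get_main_ns_ip helper, which picks the address the host side should dial: NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp, resolved through indirect expansion (hence the [[ -z 10.0.0.1 ]] / echo 10.0.0.1 lines that follow). A rough, hedged reconstruction of that logic from the xtrace output; the transport variable name (TEST_TRANSPORT below) is an assumption, only its value "tcp" is visible in the trace:

    # Reconstructed from the xtrace lines around this point; not the verbatim nvmf/common.sh helper.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT ]] && return 1                  # expands to "tcp" in this run
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}                  # -> NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1                           # indirect expansion -> 10.0.0.1
        echo "${!ip}"
    }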
00:37:47.060 10:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:47.060 10:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:47.060 10:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:47.060 10:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:47.060 10:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:47.060 10:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:47.060 10:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:47.630 nvme0n1 00:37:47.630 10:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:47.630 10:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:47.630 10:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:47.630 10:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:47.630 10:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:47.630 10:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:47.630 10:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:47.630 10:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:47.630 10:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:47.630 10:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:47.630 10:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:47.630 10:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:47.630 10:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:37:47.630 10:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:47.630 10:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:47.630 10:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:47.630 10:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:47.630 10:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzcwYTcwYmQ4NjkyYWExZmNhOGI1YjA5ZDY2NTEzMGZhZmVhYjU2ODA3ZWQ1ZmY1COXRbw==: 00:37:47.630 10:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWVkYTE3ZWE2MGQwZTE0ZDc5NDVmMmUxN2NmMTJmNTdjN2JiNWY5ZDYxNmU5ZjlitK/bBA==: 00:37:47.630 10:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:47.630 10:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:47.630 10:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzcwYTcwYmQ4NjkyYWExZmNhOGI1YjA5ZDY2NTEzMGZhZmVhYjU2ODA3ZWQ1ZmY1COXRbw==: 00:37:47.630 10:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWVkYTE3ZWE2MGQwZTE0ZDc5NDVmMmUxN2NmMTJmNTdjN2JiNWY5ZDYxNmU5ZjlitK/bBA==: ]] 00:37:47.630 10:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWVkYTE3ZWE2MGQwZTE0ZDc5NDVmMmUxN2NmMTJmNTdjN2JiNWY5ZDYxNmU5ZjlitK/bBA==: 00:37:47.630 10:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
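Each keyid pass in this trace runs the same host-side RPC cycle: restrict bdev_nvme to the digest/dhgroup under test, attach a controller with the corresponding DH-HMAC-CHAP key pair, check that the controller actually appears (i.e. authentication succeeded), then detach it before the next pass. Condensed to the rpc_cmd invocations visible above for the ffdhe6144/key0 pass; key0/ckey0 are key names registered earlier in the run, outside this excerpt:

    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]   # controller attached => auth OK
    rpc_cmd bdev_nvme_detach_controller nvme0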
00:37:47.630 10:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:47.630 10:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:47.630 10:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:47.630 10:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:47.630 10:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:47.630 10:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:37:47.630 10:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:47.630 10:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:47.630 10:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:47.630 10:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:47.630 10:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:47.630 10:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:47.630 10:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:47.630 10:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:47.630 10:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:47.630 10:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:47.630 10:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:47.630 10:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:47.630 10:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:47.630 10:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:47.630 10:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:47.630 10:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:47.630 10:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:47.890 nvme0n1 00:37:47.890 10:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:47.890 10:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:47.890 10:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:47.890 10:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:47.890 10:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:47.890 10:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:48.151 10:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:48.151 10:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:48.151 10:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:48.151 10:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:48.151 10:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:48.151 10:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:37:48.151 10:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:37:48.151 10:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:48.151 10:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:48.151 10:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:48.151 10:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:48.151 10:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmIxYjhkNTliYzE1MGQ2ODZmZDgzNzlmNWNiY2UwMzk8eQIO: 00:37:48.151 10:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTI5OWRmODZhODgyNjE4MWNkNTgwMTM0MzBkYzljNTQ2R3GO: 00:37:48.151 10:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:48.151 10:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:48.151 10:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmIxYjhkNTliYzE1MGQ2ODZmZDgzNzlmNWNiY2UwMzk8eQIO: 00:37:48.151 10:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTI5OWRmODZhODgyNjE4MWNkNTgwMTM0MzBkYzljNTQ2R3GO: ]] 00:37:48.151 10:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTI5OWRmODZhODgyNjE4MWNkNTgwMTM0MzBkYzljNTQ2R3GO: 00:37:48.151 10:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:37:48.151 10:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:48.151 10:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:48.151 10:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:48.151 10:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:48.151 10:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:48.151 10:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:37:48.151 10:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:48.151 10:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:48.151 10:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:48.151 10:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:48.151 10:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:48.151 10:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:48.151 10:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:48.151 10:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:48.151 10:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:48.151 10:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:48.151 10:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:48.151 10:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:48.151 10:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:48.151 10:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:48.151 10:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:48.151 10:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:48.151 10:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:48.412 nvme0n1 00:37:48.412 10:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:48.412 10:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:48.412 10:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:48.412 10:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:48.412 10:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:48.412 10:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:48.672 10:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:48.672 10:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:48.672 10:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:48.672 10:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:48.672 10:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:48.672 10:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:48.672 10:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:37:48.672 10:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:48.672 10:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:48.672 10:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:48.672 10:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:48.672 10:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjA3ODIzZjk1ZmY4MTVlMDBiYzZiMTk1MDk2NzI4ZGE3MTNmODg3ODIwYjllZDU3xsMkhQ==: 00:37:48.672 10:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWY4ZDQ3ZDc4NzdlZGQzMTM1MTg3YTEzOTVjZWE0NGQeD0No: 00:37:48.672 10:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:48.672 10:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:48.672 10:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjA3ODIzZjk1ZmY4MTVlMDBiYzZiMTk1MDk2NzI4ZGE3MTNmODg3ODIwYjllZDU3xsMkhQ==: 00:37:48.672 10:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWY4ZDQ3ZDc4NzdlZGQzMTM1MTg3YTEzOTVjZWE0NGQeD0No: ]] 00:37:48.672 10:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWY4ZDQ3ZDc4NzdlZGQzMTM1MTg3YTEzOTVjZWE0NGQeD0No: 00:37:48.672 10:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:37:48.672 10:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:48.672 10:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:48.672 10:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:48.672 10:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:48.672 10:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:48.672 10:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:37:48.672 10:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:48.672 10:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:48.672 10:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:48.672 10:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:48.672 10:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:48.672 10:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:48.672 10:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:48.672 10:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:48.672 10:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:48.672 10:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:48.672 10:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:48.672 10:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:48.672 10:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:48.672 10:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:48.672 10:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:48.672 10:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:48.672 10:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:48.932 nvme0n1 00:37:48.932 10:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:48.932 10:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:48.932 10:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:48.932 10:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:48.932 10:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:48.932 10:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:49.191 10:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:49.191 10:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:49.191 10:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:49.191 10:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:49.191 10:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:49.191 10:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:49.191 10:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:37:49.191 10:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:49.191 10:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:49.191 10:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:49.191 10:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:49.191 10:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YWVmZWQwZDczNmNhOTYzZjI5OTAwNDFmZjczNDQ0MTg5YjY3YjYwMjQ2Nzg0YWM1NzA0ZjZlOTg1ZTU5YTJjYxUuTy0=: 00:37:49.191 10:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:49.191 10:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:49.191 10:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:49.191 10:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWVmZWQwZDczNmNhOTYzZjI5OTAwNDFmZjczNDQ0MTg5YjY3YjYwMjQ2Nzg0YWM1NzA0ZjZlOTg1ZTU5YTJjYxUuTy0=: 00:37:49.191 10:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:49.191 10:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:37:49.191 10:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:49.191 10:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:49.191 10:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:49.191 10:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:49.191 10:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:49.191 10:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:37:49.191 10:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:49.191 10:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:49.191 10:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:49.191 10:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:49.191 10:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:49.191 10:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:49.191 10:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:49.191 10:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:49.191 10:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:49.191 10:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:49.191 10:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:49.191 10:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:49.191 10:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:49.191 10:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:49.191 10:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:49.191 10:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:49.191 10:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:49.450 nvme0n1 00:37:49.450 10:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:49.450 10:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:49.450 10:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:49.450 10:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:49.450 10:53:55 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:49.450 10:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:49.450 10:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:49.450 10:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:49.450 10:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:49.450 10:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:49.710 10:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:49.710 10:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:49.710 10:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:49.710 10:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:37:49.710 10:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:49.710 10:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:49.710 10:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:49.710 10:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:49.710 10:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJmNGY4ZjVhNjYzZjNmMmE5M2M3YzA2MWJiZDIzYjO1d+Fn: 00:37:49.710 10:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDI3YTBmMmQxNTY2OGU5NWI5Y2QxZWRhZTQ5YjM2ZGI4OWUyZWQ1ZTRjYTlhNWMyZTY4ZjhmODc2NWQwYTQ2Narqa48=: 00:37:49.710 10:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:49.710 10:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:49.710 10:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJmNGY4ZjVhNjYzZjNmMmE5M2M3YzA2MWJiZDIzYjO1d+Fn: 00:37:49.710 10:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDI3YTBmMmQxNTY2OGU5NWI5Y2QxZWRhZTQ5YjM2ZGI4OWUyZWQ1ZTRjYTlhNWMyZTY4ZjhmODc2NWQwYTQ2Narqa48=: ]] 00:37:49.710 10:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDI3YTBmMmQxNTY2OGU5NWI5Y2QxZWRhZTQ5YjM2ZGI4OWUyZWQ1ZTRjYTlhNWMyZTY4ZjhmODc2NWQwYTQ2Narqa48=: 00:37:49.710 10:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:37:49.710 10:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:49.710 10:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:49.710 10:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:49.710 10:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:49.710 10:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:49.710 10:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:37:49.710 10:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:49.710 10:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:49.710 10:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:49.710 10:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:49.710 10:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:49.710 10:53:55 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:37:49.710 10:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:49.710 10:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:49.710 10:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:49.710 10:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:49.710 10:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:49.710 10:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:49.710 10:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:49.710 10:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:49.710 10:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:49.710 10:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:49.710 10:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:50.279 nvme0n1 00:37:50.279 10:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:50.279 10:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:50.279 10:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:50.279 10:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:50.279 10:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:50.279 10:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:50.279 10:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:50.279 10:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:50.279 10:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:50.279 10:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:50.279 10:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:50.279 10:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:50.279 10:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:37:50.279 10:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:50.279 10:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:50.279 10:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:50.279 10:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:50.279 10:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzcwYTcwYmQ4NjkyYWExZmNhOGI1YjA5ZDY2NTEzMGZhZmVhYjU2ODA3ZWQ1ZmY1COXRbw==: 00:37:50.279 10:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWVkYTE3ZWE2MGQwZTE0ZDc5NDVmMmUxN2NmMTJmNTdjN2JiNWY5ZDYxNmU5ZjlitK/bBA==: 00:37:50.279 10:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:50.279 10:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:50.279 10:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NzcwYTcwYmQ4NjkyYWExZmNhOGI1YjA5ZDY2NTEzMGZhZmVhYjU2ODA3ZWQ1ZmY1COXRbw==: 00:37:50.279 10:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWVkYTE3ZWE2MGQwZTE0ZDc5NDVmMmUxN2NmMTJmNTdjN2JiNWY5ZDYxNmU5ZjlitK/bBA==: ]] 00:37:50.279 10:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWVkYTE3ZWE2MGQwZTE0ZDc5NDVmMmUxN2NmMTJmNTdjN2JiNWY5ZDYxNmU5ZjlitK/bBA==: 00:37:50.279 10:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:37:50.279 10:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:50.279 10:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:50.279 10:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:50.279 10:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:50.279 10:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:50.279 10:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:37:50.279 10:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:50.279 10:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:50.279 10:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:50.279 10:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:50.279 10:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:50.279 10:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:50.279 10:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:50.279 10:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:50.279 10:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:50.279 10:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:50.279 10:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:50.279 10:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:50.279 10:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:50.279 10:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:50.539 10:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:50.539 10:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:50.539 10:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:51.109 nvme0n1 00:37:51.109 10:53:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:51.109 10:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:51.109 10:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:51.109 10:53:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:51.109 10:53:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:51.109 10:53:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:51.109 10:53:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:51.109 10:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:51.109 10:53:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:51.109 10:53:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:51.109 10:53:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:51.109 10:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:51.109 10:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:37:51.109 10:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:51.109 10:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:51.109 10:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:51.109 10:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:51.109 10:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmIxYjhkNTliYzE1MGQ2ODZmZDgzNzlmNWNiY2UwMzk8eQIO: 00:37:51.109 10:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTI5OWRmODZhODgyNjE4MWNkNTgwMTM0MzBkYzljNTQ2R3GO: 00:37:51.109 10:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:51.109 10:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:51.109 10:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmIxYjhkNTliYzE1MGQ2ODZmZDgzNzlmNWNiY2UwMzk8eQIO: 00:37:51.109 10:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTI5OWRmODZhODgyNjE4MWNkNTgwMTM0MzBkYzljNTQ2R3GO: ]] 00:37:51.109 10:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTI5OWRmODZhODgyNjE4MWNkNTgwMTM0MzBkYzljNTQ2R3GO: 00:37:51.109 10:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:37:51.109 10:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:51.109 10:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:51.109 10:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:51.109 10:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:51.109 10:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:51.109 10:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:37:51.109 10:53:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:51.109 10:53:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:51.109 10:53:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:51.109 10:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:51.109 10:53:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:51.109 10:53:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:51.109 10:53:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:51.109 10:53:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:51.109 10:53:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:51.109 10:53:56 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:51.109 10:53:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:51.109 10:53:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:51.109 10:53:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:51.109 10:53:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:51.109 10:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:51.109 10:53:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:51.109 10:53:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:52.051 nvme0n1 00:37:52.051 10:53:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:52.051 10:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:52.051 10:53:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:52.051 10:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:52.051 10:53:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:52.051 10:53:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:52.051 10:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:52.051 10:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:52.051 10:53:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:52.051 10:53:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:52.051 10:53:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:52.051 10:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:52.051 10:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:37:52.051 10:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:52.051 10:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:52.051 10:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:52.051 10:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:52.051 10:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjA3ODIzZjk1ZmY4MTVlMDBiYzZiMTk1MDk2NzI4ZGE3MTNmODg3ODIwYjllZDU3xsMkhQ==: 00:37:52.051 10:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWY4ZDQ3ZDc4NzdlZGQzMTM1MTg3YTEzOTVjZWE0NGQeD0No: 00:37:52.051 10:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:52.051 10:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:52.051 10:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjA3ODIzZjk1ZmY4MTVlMDBiYzZiMTk1MDk2NzI4ZGE3MTNmODg3ODIwYjllZDU3xsMkhQ==: 00:37:52.051 10:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWY4ZDQ3ZDc4NzdlZGQzMTM1MTg3YTEzOTVjZWE0NGQeD0No: ]] 00:37:52.051 10:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWY4ZDQ3ZDc4NzdlZGQzMTM1MTg3YTEzOTVjZWE0NGQeD0No: 00:37:52.051 10:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:37:52.051 10:53:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:52.051 10:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:52.051 10:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:52.051 10:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:52.051 10:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:52.051 10:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:37:52.051 10:53:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:52.051 10:53:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:52.051 10:53:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:52.051 10:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:52.051 10:53:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:52.051 10:53:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:52.051 10:53:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:52.051 10:53:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:52.051 10:53:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:52.051 10:53:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:52.051 10:53:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:52.051 10:53:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:52.051 10:53:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:52.051 10:53:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:52.051 10:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:52.051 10:53:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:52.051 10:53:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:52.622 nvme0n1 00:37:52.622 10:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:52.622 10:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:52.622 10:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:52.622 10:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:52.622 10:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:52.882 10:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:52.882 10:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:52.882 10:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:52.882 10:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:52.882 10:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:52.882 10:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:52.882 10:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:37:52.882 10:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:37:52.882 10:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:52.882 10:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:52.882 10:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:52.882 10:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:52.882 10:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWVmZWQwZDczNmNhOTYzZjI5OTAwNDFmZjczNDQ0MTg5YjY3YjYwMjQ2Nzg0YWM1NzA0ZjZlOTg1ZTU5YTJjYxUuTy0=: 00:37:52.882 10:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:52.882 10:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:52.882 10:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:52.882 10:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWVmZWQwZDczNmNhOTYzZjI5OTAwNDFmZjczNDQ0MTg5YjY3YjYwMjQ2Nzg0YWM1NzA0ZjZlOTg1ZTU5YTJjYxUuTy0=: 00:37:52.882 10:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:52.882 10:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:37:52.882 10:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:52.882 10:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:52.882 10:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:52.882 10:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:52.882 10:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:52.882 10:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:37:52.882 10:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:52.882 10:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:52.882 10:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:52.882 10:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:52.882 10:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:52.882 10:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:52.882 10:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:52.882 10:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:52.882 10:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:52.882 10:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:52.882 10:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:52.882 10:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:52.882 10:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:52.882 10:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:52.882 10:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:52.882 10:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:37:52.882 10:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:53.451 nvme0n1 00:37:53.451 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:53.451 10:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:53.451 10:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:53.451 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:53.451 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:53.451 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:53.710 10:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:53.710 10:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:53.710 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:53.710 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:53.710 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:53.710 10:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzcwYTcwYmQ4NjkyYWExZmNhOGI1YjA5ZDY2NTEzMGZhZmVhYjU2ODA3ZWQ1ZmY1COXRbw==: 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWVkYTE3ZWE2MGQwZTE0ZDc5NDVmMmUxN2NmMTJmNTdjN2JiNWY5ZDYxNmU5ZjlitK/bBA==: 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzcwYTcwYmQ4NjkyYWExZmNhOGI1YjA5ZDY2NTEzMGZhZmVhYjU2ODA3ZWQ1ZmY1COXRbw==: 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWVkYTE3ZWE2MGQwZTE0ZDc5NDVmMmUxN2NmMTJmNTdjN2JiNWY5ZDYxNmU5ZjlitK/bBA==: ]] 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWVkYTE3ZWE2MGQwZTE0ZDc5NDVmMmUxN2NmMTJmNTdjN2JiNWY5ZDYxNmU5ZjlitK/bBA==: 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:53.711 
10:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:53.711 request: 00:37:53.711 { 00:37:53.711 "name": "nvme0", 00:37:53.711 "trtype": "tcp", 00:37:53.711 "traddr": "10.0.0.1", 00:37:53.711 "adrfam": "ipv4", 00:37:53.711 "trsvcid": "4420", 00:37:53.711 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:37:53.711 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:37:53.711 "prchk_reftag": false, 00:37:53.711 "prchk_guard": false, 00:37:53.711 "hdgst": false, 00:37:53.711 "ddgst": false, 00:37:53.711 "method": "bdev_nvme_attach_controller", 00:37:53.711 "req_id": 1 00:37:53.711 } 00:37:53.711 Got JSON-RPC error response 00:37:53.711 response: 00:37:53.711 { 00:37:53.711 "code": -5, 00:37:53.711 "message": "Input/output error" 00:37:53.711 } 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:53.711 request: 00:37:53.711 { 00:37:53.711 "name": "nvme0", 00:37:53.711 "trtype": "tcp", 00:37:53.711 "traddr": "10.0.0.1", 00:37:53.711 "adrfam": "ipv4", 00:37:53.711 "trsvcid": "4420", 00:37:53.711 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:37:53.711 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:37:53.711 "prchk_reftag": false, 00:37:53.711 "prchk_guard": false, 00:37:53.711 "hdgst": false, 00:37:53.711 "ddgst": false, 00:37:53.711 "dhchap_key": "key2", 00:37:53.711 "method": "bdev_nvme_attach_controller", 00:37:53.711 "req_id": 1 00:37:53.711 } 00:37:53.711 Got JSON-RPC error response 00:37:53.711 response: 00:37:53.711 { 00:37:53.711 "code": -5, 00:37:53.711 "message": "Input/output error" 00:37:53.711 } 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:37:53.711 10:53:59 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:53.711 request: 00:37:53.711 { 00:37:53.711 "name": "nvme0", 00:37:53.711 "trtype": "tcp", 00:37:53.711 "traddr": "10.0.0.1", 00:37:53.711 "adrfam": "ipv4", 
00:37:53.711 "trsvcid": "4420", 00:37:53.711 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:37:53.711 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:37:53.711 "prchk_reftag": false, 00:37:53.711 "prchk_guard": false, 00:37:53.711 "hdgst": false, 00:37:53.711 "ddgst": false, 00:37:53.711 "dhchap_key": "key1", 00:37:53.711 "dhchap_ctrlr_key": "ckey2", 00:37:53.711 "method": "bdev_nvme_attach_controller", 00:37:53.711 "req_id": 1 00:37:53.711 } 00:37:53.711 Got JSON-RPC error response 00:37:53.711 response: 00:37:53.711 { 00:37:53.711 "code": -5, 00:37:53.711 "message": "Input/output error" 00:37:53.711 } 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:37:53.711 10:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:37:53.712 10:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:53.712 10:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:37:53.712 10:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:53.712 10:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:37:53.712 10:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:53.712 10:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:53.712 rmmod nvme_tcp 00:37:53.972 rmmod nvme_fabrics 00:37:53.972 10:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:53.972 10:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:37:53.972 10:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:37:53.972 10:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 2228533 ']' 00:37:53.972 10:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 2228533 00:37:53.972 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 2228533 ']' 00:37:53.972 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 2228533 00:37:53.972 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:37:53.972 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:53.972 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2228533 00:37:53.972 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:53.972 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:53.972 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2228533' 00:37:53.972 killing process with pid 2228533 00:37:53.972 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 2228533 00:37:53.972 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 2228533 00:37:53.972 10:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:37:53.972 10:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:53.972 10:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:53.972 10:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:53.972 10:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:53.972 10:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:53.972 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:53.972 10:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:56.516 10:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:56.516 10:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:37:56.516 10:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:37:56.516 10:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:37:56.516 10:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:37:56.516 10:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:37:56.516 10:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:37:56.516 10:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:37:56.516 10:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:56.516 10:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:37:56.516 10:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:37:56.516 10:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:37:56.516 10:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:59.826 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:59.826 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:59.826 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:59.826 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:59.826 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:59.826 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:59.826 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:59.826 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:59.826 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:59.826 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:59.826 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:59.826 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:59.826 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:59.826 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:00.087 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:00.087 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:00.087 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:38:00.087 10:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.ubD /tmp/spdk.key-null.9Jl /tmp/spdk.key-sha256.SDr /tmp/spdk.key-sha384.yqk /tmp/spdk.key-sha512.bhW 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:38:00.087 10:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:04.391 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:38:04.391 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:38:04.391 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:38:04.391 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:38:04.391 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:38:04.391 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:38:04.391 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:38:04.391 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:38:04.391 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:38:04.391 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:38:04.391 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:38:04.391 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:38:04.391 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:38:04.391 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:38:04.391 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:38:04.391 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:38:04.391 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:38:04.391 00:38:04.391 real 0m57.873s 00:38:04.391 user 0m50.416s 00:38:04.391 sys 0m15.652s 00:38:04.391 10:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:04.391 10:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:04.391 ************************************ 00:38:04.391 END TEST nvmf_auth_host 00:38:04.391 ************************************ 00:38:04.391 10:54:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:38:04.391 10:54:09 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:38:04.391 10:54:09 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:38:04.391 10:54:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:38:04.391 10:54:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:04.391 10:54:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:04.391 ************************************ 00:38:04.391 START TEST nvmf_digest 00:38:04.391 ************************************ 00:38:04.391 10:54:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:38:04.391 * Looking for test storage... 
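The nvmf_auth_host run that ends above walks each DH-HMAC-CHAP key through the same loop: program the key into the kernel nvmet target, constrain the SPDK initiator to the matching digest and DH group, then attach and detach the controller. A rough sketch of one sha512/ffdhe8192 pass is below, using the keyid=2 secrets shown earlier in the log; here rpc.py stands for SPDK's scripts/rpc.py, the dhchap_* configfs attribute names are assumed rather than taken from the log, and the key names key2/ckey2 are assumed to have been registered by the script beforehand. This is an illustration of the flow, not a verified recipe.

  # One DH-HMAC-CHAP pass as exercised by host/auth.sh (sketch; see assumptions above)
  hostnqn=nqn.2024-02.io.spdk:host0
  subnqn=nqn.2024-02.io.spdk:cnode0
  cfs=/sys/kernel/config/nvmet/hosts/$hostnqn
  key='DHHC-1:01:NmIxYjhkNTliYzE1MGQ2ODZmZDgzNzlmNWNiY2UwMzk8eQIO:'
  ckey='DHHC-1:01:MTI5OWRmODZhODgyNjE4MWNkNTgwMTM0MzBkYzljNTQ2R3GO:'

  # Target side (kernel nvmet): set digest, DH group and keys for this host.
  echo 'hmac(sha512)' > "$cfs/dhchap_hash"      # attribute name assumed
  echo ffdhe8192      > "$cfs/dhchap_dhgroup"   # attribute name assumed
  echo "$key"         > "$cfs/dhchap_key"
  echo "$ckey"        > "$cfs/dhchap_ctrl_key"  # only needed for bidirectional auth

  # Initiator side (SPDK): negotiate only this digest/DH group, then connect with matching keys.
  rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
         -q "$hostnqn" -n "$subnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2
  rpc.py bdev_nvme_get_controllers            # expect a controller named nvme0 on success
  rpc.py bdev_nvme_detach_controller nvme0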
00:38:04.391 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:38:04.391 10:54:09 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:04.391 10:54:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:38:04.392 10:54:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:04.392 10:54:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:04.392 10:54:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:04.392 10:54:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:04.392 10:54:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:04.392 10:54:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:04.392 10:54:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:04.392 10:54:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:04.392 10:54:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:04.392 10:54:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:04.392 10:54:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:04.392 10:54:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:04.392 10:54:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:04.392 10:54:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:04.392 10:54:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:04.392 10:54:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:04.392 10:54:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:04.392 10:54:09 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:04.392 10:54:09 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:04.392 10:54:09 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:04.392 10:54:09 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:04.392 10:54:09 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:04.392 10:54:09 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:04.392 10:54:09 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:38:04.392 10:54:09 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:04.392 10:54:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:38:04.392 10:54:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:04.392 10:54:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:04.392 10:54:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:04.392 10:54:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:04.392 10:54:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:04.392 10:54:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:04.392 10:54:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:04.392 10:54:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:04.392 10:54:09 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:38:04.392 10:54:09 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:38:04.392 10:54:09 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:38:04.392 10:54:09 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:38:04.392 10:54:09 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:38:04.392 10:54:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:04.392 10:54:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:04.392 10:54:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:04.392 10:54:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:04.392 10:54:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:04.392 10:54:09 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:04.392 10:54:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:04.392 10:54:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:04.392 10:54:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:38:04.392 10:54:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:38:04.392 10:54:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:38:04.392 10:54:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:38:12.520 Found 0000:31:00.0 (0x8086 - 0x159b) 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:38:12.520 Found 0000:31:00.1 (0x8086 - 0x159b) 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:38:12.520 Found net devices under 0000:31:00.0: cvl_0_0 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:12.520 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:38:12.520 Found net devices under 0000:31:00.1: cvl_0_1 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:38:12.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:12.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.757 ms 00:38:12.521 00:38:12.521 --- 10.0.0.2 ping statistics --- 00:38:12.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:12.521 rtt min/avg/max/mdev = 0.757/0.757/0.757/0.000 ms 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:12.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:12.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:38:12.521 00:38:12.521 --- 10.0.0.1 ping statistics --- 00:38:12.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:12.521 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:38:12.521 ************************************ 00:38:12.521 START TEST nvmf_digest_clean 00:38:12.521 ************************************ 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=2246035 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 2246035 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2246035 ']' 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:12.521 
10:54:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:12.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:12.521 10:54:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:12.521 [2024-07-22 10:54:17.498866] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:38:12.521 [2024-07-22 10:54:17.498916] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:12.521 EAL: No free 2048 kB hugepages reported on node 1 00:38:12.521 [2024-07-22 10:54:17.572432] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:12.521 [2024-07-22 10:54:17.605850] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:12.521 [2024-07-22 10:54:17.605890] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:12.521 [2024-07-22 10:54:17.605897] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:12.521 [2024-07-22 10:54:17.605903] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:12.521 [2024-07-22 10:54:17.605909] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
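The digest test reuses the TCP topology that nvmf_tcp_init set up above: the first E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and owns the target address 10.0.0.2, while cvl_0_1 stays in the host namespace as the initiator at 10.0.0.1, and nvmf_tgt is launched inside that namespace with --wait-for-rpc. Condensed from the commands visible in the log (paths shortened to the SPDK tree), the wiring is roughly:

  # Target NIC lives in its own namespace; initiator NIC stays in the host namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT      # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                # host -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                  # target namespace -> host

  # The target itself then runs inside the namespace and is driven over /var/tmp/spdk.sock.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &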
00:38:12.521 [2024-07-22 10:54:17.605927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:12.782 10:54:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:12.782 10:54:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:38:12.782 10:54:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:12.782 10:54:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:38:12.782 10:54:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:12.782 10:54:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:12.782 10:54:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:38:12.782 10:54:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:38:12.782 10:54:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:38:12.782 10:54:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:12.782 10:54:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:12.782 null0 00:38:12.782 [2024-07-22 10:54:18.370468] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:12.782 [2024-07-22 10:54:18.394667] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:12.782 10:54:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:12.782 10:54:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:38:12.782 10:54:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:38:12.782 10:54:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:38:12.782 10:54:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:38:12.782 10:54:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:38:12.782 10:54:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:38:12.782 10:54:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:38:12.782 10:54:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2246357 00:38:12.782 10:54:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2246357 /var/tmp/bperf.sock 00:38:12.782 10:54:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2246357 ']' 00:38:12.782 10:54:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:12.782 10:54:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:12.782 10:54:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:12.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
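common_target_config above is what produces the null0 bdev and the listener notice on 10.0.0.2:4420. Sketched as individual RPC calls, it would look roughly like the following; rpc.py stands for SPDK's scripts/rpc.py, and the null-bdev size and block size are illustrative values rather than ones taken from the log.

  rpc="rpc.py -s /var/tmp/spdk.sock"                 # spdk.sock is a UNIX socket, reachable from the host namespace
  $rpc framework_start_init                          # required because the target was started with --wait-for-rpc
  $rpc nvmf_create_transport -t tcp -o               # matches NVMF_TRANSPORT_OPTS='-t tcp -o'
  $rpc bdev_null_create null0 1000 512               # size/block size assumed for illustration
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420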
00:38:12.782 10:54:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:12.782 10:54:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:12.782 10:54:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:38:12.782 [2024-07-22 10:54:18.447449] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:38:12.782 [2024-07-22 10:54:18.447495] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2246357 ] 00:38:12.782 EAL: No free 2048 kB hugepages reported on node 1 00:38:13.042 [2024-07-22 10:54:18.528600] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:13.042 [2024-07-22 10:54:18.559448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:13.610 10:54:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:13.610 10:54:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:38:13.610 10:54:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:38:13.610 10:54:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:38:13.610 10:54:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:13.870 10:54:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:13.870 10:54:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:14.131 nvme0n1 00:38:14.131 10:54:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:38:14.131 10:54:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:14.131 Running I/O for 2 seconds... 
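Each digest-clean run drives bdevperf through the same four steps spelled out in the commands above: start bdevperf against its own RPC socket with --wait-for-rpc, release the framework, attach the controller with the data digest enabled, then launch the workload. Condensed, with paths shortened relative to the spdk checkout, the first run is:

    build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The --ddgst flag is what enables the NVMe/TCP data digest on this connection, so every data PDU in the run carries a crc32c digest.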
00:38:16.675 00:38:16.675 Latency(us) 00:38:16.675 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:16.675 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:38:16.675 nvme0n1 : 2.00 18734.82 73.18 0.00 0.00 6824.03 3099.31 14854.83 00:38:16.675 =================================================================================================================== 00:38:16.675 Total : 18734.82 73.18 0.00 0.00 6824.03 3099.31 14854.83 00:38:16.675 0 00:38:16.675 10:54:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:38:16.675 10:54:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:38:16.675 10:54:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:38:16.675 10:54:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:38:16.675 | select(.opcode=="crc32c") 00:38:16.675 | "\(.module_name) \(.executed)"' 00:38:16.675 10:54:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:38:16.675 10:54:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:38:16.675 10:54:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:38:16.675 10:54:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:38:16.675 10:54:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:38:16.675 10:54:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2246357 00:38:16.675 10:54:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2246357 ']' 00:38:16.675 10:54:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2246357 00:38:16.675 10:54:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:38:16.675 10:54:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:16.675 10:54:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2246357 00:38:16.675 10:54:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:38:16.675 10:54:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:38:16.675 10:54:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2246357' 00:38:16.675 killing process with pid 2246357 00:38:16.675 10:54:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2246357 00:38:16.675 Received shutdown signal, test time was about 2.000000 seconds 00:38:16.675 00:38:16.675 Latency(us) 00:38:16.675 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:16.675 =================================================================================================================== 00:38:16.675 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:16.675 10:54:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2246357 00:38:16.675 10:54:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:38:16.675 10:54:22 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:38:16.675 10:54:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:38:16.675 10:54:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:38:16.675 10:54:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:38:16.675 10:54:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:38:16.675 10:54:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:38:16.675 10:54:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2247069 00:38:16.675 10:54:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2247069 /var/tmp/bperf.sock 00:38:16.675 10:54:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2247069 ']' 00:38:16.675 10:54:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:38:16.675 10:54:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:16.675 10:54:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:16.675 10:54:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:16.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:16.675 10:54:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:16.675 10:54:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:16.675 [2024-07-22 10:54:22.155875] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:38:16.675 [2024-07-22 10:54:22.155929] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2247069 ] 00:38:16.675 I/O size of 131072 is greater than zero copy threshold (65536). 00:38:16.675 Zero copy mechanism will not be used. 
00:38:16.675 EAL: No free 2048 kB hugepages reported on node 1 00:38:16.675 [2024-07-22 10:54:22.237838] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:16.675 [2024-07-22 10:54:22.268240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:17.244 10:54:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:17.244 10:54:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:38:17.244 10:54:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:38:17.244 10:54:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:38:17.244 10:54:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:17.517 10:54:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:17.518 10:54:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:17.779 nvme0n1 00:38:17.779 10:54:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:38:17.779 10:54:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:17.779 I/O size of 131072 is greater than zero copy threshold (65536). 00:38:17.779 Zero copy mechanism will not be used. 00:38:17.779 Running I/O for 2 seconds... 
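After each two-second run the script reads the accel statistics from the bperf application and requires that crc32c operations were actually executed, and by the expected module; since scan_dsa=false in these runs, the expected module is software. The check shown above, and repeated after every run, reduces to:

    scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # passes when this prints "software <count>" with a non-zero count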
00:38:20.321 00:38:20.321 Latency(us) 00:38:20.321 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:20.321 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:38:20.321 nvme0n1 : 2.00 4095.52 511.94 0.00 0.00 3901.65 641.71 12342.61 00:38:20.321 =================================================================================================================== 00:38:20.321 Total : 4095.52 511.94 0.00 0.00 3901.65 641.71 12342.61 00:38:20.321 0 00:38:20.321 10:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:38:20.321 10:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:38:20.321 10:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:38:20.321 10:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:38:20.321 | select(.opcode=="crc32c") 00:38:20.321 | "\(.module_name) \(.executed)"' 00:38:20.321 10:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:38:20.321 10:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:38:20.321 10:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:38:20.321 10:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:38:20.321 10:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:38:20.321 10:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2247069 00:38:20.321 10:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2247069 ']' 00:38:20.321 10:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2247069 00:38:20.322 10:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:38:20.322 10:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:20.322 10:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2247069 00:38:20.322 10:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:38:20.322 10:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:38:20.322 10:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2247069' 00:38:20.322 killing process with pid 2247069 00:38:20.322 10:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2247069 00:38:20.322 Received shutdown signal, test time was about 2.000000 seconds 00:38:20.322 00:38:20.322 Latency(us) 00:38:20.322 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:20.322 =================================================================================================================== 00:38:20.322 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:20.322 10:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2247069 00:38:20.322 10:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:38:20.322 10:54:25 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:38:20.322 10:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:38:20.322 10:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:38:20.322 10:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:38:20.322 10:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:38:20.322 10:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:38:20.322 10:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2247753 00:38:20.322 10:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2247753 /var/tmp/bperf.sock 00:38:20.322 10:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2247753 ']' 00:38:20.322 10:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:20.322 10:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:38:20.322 10:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:20.322 10:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:20.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:20.322 10:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:20.322 10:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:20.322 [2024-07-22 10:54:25.781512] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
00:38:20.322 [2024-07-22 10:54:25.781582] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2247753 ] 00:38:20.322 EAL: No free 2048 kB hugepages reported on node 1 00:38:20.322 [2024-07-22 10:54:25.860475] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:20.322 [2024-07-22 10:54:25.887152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:20.893 10:54:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:20.893 10:54:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:38:20.893 10:54:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:38:20.893 10:54:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:38:20.893 10:54:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:21.152 10:54:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:21.152 10:54:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:21.411 nvme0n1 00:38:21.411 10:54:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:38:21.411 10:54:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:21.671 Running I/O for 2 seconds... 
00:38:23.581 00:38:23.581 Latency(us) 00:38:23.581 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:23.581 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:23.581 nvme0n1 : 2.01 22058.57 86.17 0.00 0.00 5794.95 2061.65 9994.24 00:38:23.581 =================================================================================================================== 00:38:23.581 Total : 22058.57 86.17 0.00 0.00 5794.95 2061.65 9994.24 00:38:23.581 0 00:38:23.582 10:54:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:38:23.582 10:54:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:38:23.582 10:54:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:38:23.582 10:54:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:38:23.582 | select(.opcode=="crc32c") 00:38:23.582 | "\(.module_name) \(.executed)"' 00:38:23.582 10:54:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:38:23.840 10:54:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:38:23.840 10:54:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:38:23.840 10:54:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:38:23.840 10:54:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:38:23.840 10:54:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2247753 00:38:23.840 10:54:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2247753 ']' 00:38:23.840 10:54:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2247753 00:38:23.840 10:54:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:38:23.840 10:54:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:23.840 10:54:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2247753 00:38:23.840 10:54:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:38:23.840 10:54:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:38:23.840 10:54:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2247753' 00:38:23.840 killing process with pid 2247753 00:38:23.840 10:54:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2247753 00:38:23.840 Received shutdown signal, test time was about 2.000000 seconds 00:38:23.840 00:38:23.840 Latency(us) 00:38:23.840 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:23.840 =================================================================================================================== 00:38:23.840 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:23.840 10:54:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2247753 00:38:23.840 10:54:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:38:23.840 10:54:29 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:38:23.840 10:54:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:38:23.840 10:54:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:38:23.840 10:54:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:38:23.840 10:54:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:38:23.840 10:54:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:38:23.840 10:54:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2248433 00:38:23.840 10:54:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2248433 /var/tmp/bperf.sock 00:38:23.840 10:54:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2248433 ']' 00:38:23.840 10:54:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:23.840 10:54:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:23.840 10:54:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:23.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:23.840 10:54:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:23.840 10:54:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:23.840 10:54:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:38:24.100 [2024-07-22 10:54:29.562830] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:38:24.100 [2024-07-22 10:54:29.562884] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2248433 ] 00:38:24.100 I/O size of 131072 is greater than zero copy threshold (65536). 00:38:24.100 Zero copy mechanism will not be used. 
00:38:24.100 EAL: No free 2048 kB hugepages reported on node 1 00:38:24.100 [2024-07-22 10:54:29.641577] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:24.100 [2024-07-22 10:54:29.670192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:24.683 10:54:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:24.684 10:54:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:38:24.684 10:54:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:38:24.684 10:54:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:38:24.684 10:54:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:24.943 10:54:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:24.943 10:54:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:25.203 nvme0n1 00:38:25.203 10:54:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:38:25.203 10:54:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:25.203 I/O size of 131072 is greater than zero copy threshold (65536). 00:38:25.203 Zero copy mechanism will not be used. 00:38:25.203 Running I/O for 2 seconds... 
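This is the last of the four digest-clean workloads. The matrix exercised by the test, as launched above, varies only the I/O pattern, block size and queue depth; the attach command, and therefore the --ddgst setting, is identical every time:

    run_bperf randread  4096   128 false      # 4 KiB reads,   qd 128
    run_bperf randread  131072 16  false      # 128 KiB reads,  qd 16
    run_bperf randwrite 4096   128 false      # 4 KiB writes,  qd 128
    run_bperf randwrite 131072 16  false      # 128 KiB writes, qd 16

The trailing false is the scan_dsa flag, so all crc32c work is expected to land on the software accel module.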
00:38:27.742 00:38:27.742 Latency(us) 00:38:27.742 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:27.742 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:38:27.742 nvme0n1 : 2.00 3655.97 457.00 0.00 0.00 4369.54 2007.04 9065.81 00:38:27.742 =================================================================================================================== 00:38:27.742 Total : 3655.97 457.00 0.00 0.00 4369.54 2007.04 9065.81 00:38:27.742 0 00:38:27.742 10:54:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:38:27.742 10:54:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:38:27.742 10:54:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:38:27.742 10:54:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:38:27.742 | select(.opcode=="crc32c") 00:38:27.742 | "\(.module_name) \(.executed)"' 00:38:27.742 10:54:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:38:27.742 10:54:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:38:27.742 10:54:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:38:27.742 10:54:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:38:27.742 10:54:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:38:27.742 10:54:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2248433 00:38:27.742 10:54:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2248433 ']' 00:38:27.742 10:54:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2248433 00:38:27.742 10:54:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:38:27.742 10:54:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:27.742 10:54:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2248433 00:38:27.742 10:54:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:38:27.742 10:54:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:38:27.742 10:54:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2248433' 00:38:27.742 killing process with pid 2248433 00:38:27.742 10:54:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2248433 00:38:27.742 Received shutdown signal, test time was about 2.000000 seconds 00:38:27.742 00:38:27.742 Latency(us) 00:38:27.742 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:27.742 =================================================================================================================== 00:38:27.742 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:27.742 10:54:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2248433 00:38:27.742 10:54:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2246035 00:38:27.742 10:54:33 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2246035 ']' 00:38:27.742 10:54:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2246035 00:38:27.742 10:54:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:38:27.742 10:54:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:27.742 10:54:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2246035 00:38:27.742 10:54:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:38:27.742 10:54:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:38:27.742 10:54:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2246035' 00:38:27.742 killing process with pid 2246035 00:38:27.742 10:54:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2246035 00:38:27.742 10:54:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2246035 00:38:27.742 00:38:27.742 real 0m15.961s 00:38:27.742 user 0m31.287s 00:38:27.742 sys 0m3.301s 00:38:27.742 10:54:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:27.742 10:54:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:27.742 ************************************ 00:38:27.742 END TEST nvmf_digest_clean 00:38:27.742 ************************************ 00:38:27.742 10:54:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:38:27.742 10:54:33 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:38:27.742 10:54:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:27.742 10:54:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:27.742 10:54:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:38:28.003 ************************************ 00:38:28.003 START TEST nvmf_digest_error 00:38:28.003 ************************************ 00:38:28.003 10:54:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:38:28.003 10:54:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:38:28.003 10:54:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:28.003 10:54:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:38:28.003 10:54:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:28.003 10:54:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=2249146 00:38:28.003 10:54:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 2249146 00:38:28.003 10:54:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:38:28.003 10:54:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2249146 ']' 00:38:28.003 10:54:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:38:28.003 10:54:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:28.003 10:54:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:28.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:28.003 10:54:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:28.003 10:54:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:28.003 [2024-07-22 10:54:33.516516] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:38:28.003 [2024-07-22 10:54:33.516569] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:28.003 EAL: No free 2048 kB hugepages reported on node 1 00:38:28.003 [2024-07-22 10:54:33.587739] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:28.003 [2024-07-22 10:54:33.616865] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:28.003 [2024-07-22 10:54:33.616904] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:28.003 [2024-07-22 10:54:33.616912] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:28.003 [2024-07-22 10:54:33.616918] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:28.003 [2024-07-22 10:54:33.616923] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:28.003 [2024-07-22 10:54:33.616947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:28.955 10:54:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:28.955 10:54:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:38:28.955 10:54:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:28.955 10:54:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:38:28.955 10:54:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:28.955 10:54:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:28.955 10:54:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:38:28.955 10:54:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:28.955 10:54:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:28.955 [2024-07-22 10:54:34.334985] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:38:28.955 10:54:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:28.955 10:54:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:38:28.955 10:54:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:38:28.955 10:54:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:28.955 10:54:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:28.955 null0 00:38:28.955 [2024-07-22 10:54:34.405469] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:28.955 [2024-07-22 10:54:34.429662] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:28.955 10:54:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:28.955 10:54:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:38:28.955 10:54:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:38:28.955 10:54:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:38:28.955 10:54:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:38:28.955 10:54:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:38:28.955 10:54:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2249427 00:38:28.955 10:54:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2249427 /var/tmp/bperf.sock 00:38:28.955 10:54:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2249427 ']' 00:38:28.955 10:54:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:28.955 10:54:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:28.955 10:54:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
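The digest-error target differs from the clean case in one important way: because nvmf_tgt was started with --wait-for-rpc, the test can reassign the crc32c opcode to the accel error-injection module before the framework initializes, which is what the accel_assign_opc call and the accel_rpc.c NOTICE above show. Expressed as plain rpc.py calls against the target socket (rpc_cmd ultimately goes through scripts/rpc.py), the target-side preparation sketched from the log is:

    scripts/rpc.py accel_assign_opc -o crc32c -m error    # route crc32c through the error module, before framework init
    scripts/rpc.py framework_start_init
    # ...followed by the same null0 bdev / TCP listener configuration as in the clean test...

Later, each error case arms the injection explicitly, for example with accel_error_inject_error -o crc32c -t corrupt -i 256 as seen below, so that a bounded number of crc32c operations return corrupted results.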
00:38:28.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:28.955 10:54:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:28.955 10:54:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:28.955 10:54:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:38:28.955 [2024-07-22 10:54:34.483158] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:38:28.955 [2024-07-22 10:54:34.483205] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2249427 ] 00:38:28.955 EAL: No free 2048 kB hugepages reported on node 1 00:38:28.955 [2024-07-22 10:54:34.560560] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:28.955 [2024-07-22 10:54:34.589257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:29.894 10:54:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:29.894 10:54:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:38:29.894 10:54:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:38:29.894 10:54:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:38:29.894 10:54:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:38:29.894 10:54:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:29.894 10:54:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:29.894 10:54:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:29.894 10:54:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:29.894 10:54:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:30.154 nvme0n1 00:38:30.154 10:54:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:38:30.154 10:54:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:30.154 10:54:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:30.154 10:54:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:30.154 10:54:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:38:30.154 10:54:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:30.154 Running I/O for 2 seconds... 00:38:30.155 [2024-07-22 10:54:35.760311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa009e0) 00:38:30.155 [2024-07-22 10:54:35.760343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.155 [2024-07-22 10:54:35.760360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.155 [2024-07-22 10:54:35.772428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa009e0) 00:38:30.155 [2024-07-22 10:54:35.772452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:8373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.155 [2024-07-22 10:54:35.772463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.155 [2024-07-22 10:54:35.783920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa009e0) 00:38:30.155 [2024-07-22 10:54:35.783941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:24126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.155 [2024-07-22 10:54:35.783951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.155 [2024-07-22 10:54:35.796186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa009e0) 00:38:30.155 [2024-07-22 10:54:35.796206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.155 [2024-07-22 10:54:35.796215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.155 [2024-07-22 10:54:35.809028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa009e0) 00:38:30.155 [2024-07-22 10:54:35.809048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:8136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.155 [2024-07-22 10:54:35.809057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.155 [2024-07-22 10:54:35.821118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa009e0) 00:38:30.155 [2024-07-22 10:54:35.821135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.155 [2024-07-22 10:54:35.821145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.155 [2024-07-22 10:54:35.831456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa009e0) 00:38:30.155 [2024-07-22 10:54:35.831473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:21842 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:38:30.155 [2024-07-22 10:54:35.831483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.155 [2024-07-22 10:54:35.845325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa009e0) 00:38:30.155 [2024-07-22 10:54:35.845343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:9620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.155 [2024-07-22 10:54:35.845352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.416 [2024-07-22 10:54:35.856560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa009e0) 00:38:30.416 [2024-07-22 10:54:35.856579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:9511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.416 [2024-07-22 10:54:35.856588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.416 [2024-07-22 10:54:35.869774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa009e0) 00:38:30.416 [2024-07-22 10:54:35.869794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.416 [2024-07-22 10:54:35.869804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.416 [2024-07-22 10:54:35.883914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa009e0) 00:38:30.416 [2024-07-22 10:54:35.883933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:13664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.416 [2024-07-22 10:54:35.883942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.416 [2024-07-22 10:54:35.894986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa009e0) 00:38:30.416 [2024-07-22 10:54:35.895004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.416 [2024-07-22 10:54:35.895013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.416 [2024-07-22 10:54:35.908337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa009e0) 00:38:30.416 [2024-07-22 10:54:35.908354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.416 [2024-07-22 10:54:35.908364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.416 [2024-07-22 10:54:35.919791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa009e0) 00:38:30.416 [2024-07-22 10:54:35.919809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:92 nsid:1 lba:14426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.416 [2024-07-22 10:54:35.919818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.416 [2024-07-22 10:54:35.929981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa009e0) 00:38:30.416 [2024-07-22 10:54:35.930000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.416 [2024-07-22 10:54:35.930009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.416 [2024-07-22 10:54:35.945416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa009e0) 00:38:30.416 [2024-07-22 10:54:35.945434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:22606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.416 [2024-07-22 10:54:35.945443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.416 [2024-07-22 10:54:35.957135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa009e0) 00:38:30.416 [2024-07-22 10:54:35.957152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:18123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.416 [2024-07-22 10:54:35.957162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.416 [2024-07-22 10:54:35.969317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa009e0) 00:38:30.416 [2024-07-22 10:54:35.969335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.416 [2024-07-22 10:54:35.969348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.416 [2024-07-22 10:54:35.980819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa009e0) 00:38:30.416 [2024-07-22 10:54:35.980838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:9496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.416 [2024-07-22 10:54:35.980847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.416 [2024-07-22 10:54:35.993866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa009e0) 00:38:30.416 [2024-07-22 10:54:35.993884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.416 [2024-07-22 10:54:35.993894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.416 [2024-07-22 10:54:36.006901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa009e0) 00:38:30.416 [2024-07-22 10:54:36.006919] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.416 [2024-07-22 10:54:36.006928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:30.416 [2024-07-22 10:54:36.017839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa009e0)
00:38:30.416 [2024-07-22 10:54:36.017857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:13605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.416 [2024-07-22 10:54:36.017867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three messages (the "data digest error on tqpair=(0xa009e0)" error, the READ command print, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeat for every remaining failed READ on qid:1 from 10:54:36.031 through 10:54:37.735, differing only in timestamp, cid and lba ...]
nsid:1 lba:11589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.243 [2024-07-22 10:54:37.712125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:32.243 [2024-07-22 10:54:37.724052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa009e0) 00:38:32.243 [2024-07-22 10:54:37.724070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:18730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.243 [2024-07-22 10:54:37.724079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:32.243 [2024-07-22 10:54:37.735904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa009e0) 00:38:32.243 [2024-07-22 10:54:37.735921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:24359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.243 [2024-07-22 10:54:37.735930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:32.243 00:38:32.243 Latency(us) 00:38:32.243 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:32.243 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:38:32.243 nvme0n1 : 2.00 20588.34 80.42 0.00 0.00 6209.53 2211.84 19223.89 00:38:32.243 =================================================================================================================== 00:38:32.243 Total : 20588.34 80.42 0.00 0.00 6209.53 2211.84 19223.89 00:38:32.243 0 00:38:32.243 10:54:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:38:32.243 10:54:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:38:32.243 10:54:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:38:32.243 10:54:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:38:32.243 | .driver_specific 00:38:32.243 | .nvme_error 00:38:32.243 | .status_code 00:38:32.243 | .command_transient_transport_error' 00:38:32.243 10:54:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 161 > 0 )) 00:38:32.243 10:54:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2249427 00:38:32.243 10:54:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2249427 ']' 00:38:32.243 10:54:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2249427 00:38:32.243 10:54:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:38:32.243 10:54:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:32.243 10:54:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2249427 00:38:32.502 10:54:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:38:32.502 10:54:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:38:32.502 10:54:37 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2249427' 00:38:32.502 killing process with pid 2249427 00:38:32.502 10:54:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2249427 00:38:32.502 Received shutdown signal, test time was about 2.000000 seconds 00:38:32.502 00:38:32.502 Latency(us) 00:38:32.502 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:32.502 =================================================================================================================== 00:38:32.502 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:32.502 10:54:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2249427 00:38:32.502 10:54:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:38:32.502 10:54:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:38:32.502 10:54:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:38:32.502 10:54:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:38:32.502 10:54:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:38:32.502 10:54:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2250123 00:38:32.502 10:54:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2250123 /var/tmp/bperf.sock 00:38:32.502 10:54:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2250123 ']' 00:38:32.502 10:54:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:32.502 10:54:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:32.502 10:54:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:32.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:32.502 10:54:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:32.502 10:54:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:32.502 10:54:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:38:32.502 [2024-07-22 10:54:38.137511] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:38:32.502 [2024-07-22 10:54:38.137572] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2250123 ] 00:38:32.502 I/O size of 131072 is greater than zero copy threshold (65536). 00:38:32.502 Zero copy mechanism will not be used. 
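[Note] The `(( 161 > 0 ))` check a few records up is the pass condition for the run that just finished: with --nvme-error-stat enabled, bdevperf keeps per-bdev NVMe error counters, and digest.sh reads back how many completions carried the transient transport error status that the data digest failures above map to. A minimal sketch of that lookup, reusing the bperf socket and jq filter shown in the trace (paths abbreviated to SPDK tree-relative names; the `errcount` variable name is illustrative, and 161 is simply whatever this run happened to produce):

# Query nvme0n1's I/O statistics from the running bdevperf instance, then pull out the
# number of completions classified as COMMAND TRANSIENT TRANSPORT ERROR.
errcount=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
# The test only asserts that at least one digest error was observed before killing bperf.
(( errcount > 0 )) && echo "data digest errors were detected"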
00:38:32.502 EAL: No free 2048 kB hugepages reported on node 1 00:38:32.761 [2024-07-22 10:54:38.216016] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:32.761 [2024-07-22 10:54:38.244445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:33.434 10:54:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:33.434 10:54:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:38:33.434 10:54:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:38:33.434 10:54:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:38:33.434 10:54:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:38:33.434 10:54:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:33.434 10:54:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:33.434 10:54:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:33.434 10:54:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:33.434 10:54:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:33.694 nvme0n1 00:38:33.694 10:54:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:38:33.694 10:54:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:33.694 10:54:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:33.694 10:54:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:33.694 10:54:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:38:33.694 10:54:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:33.954 I/O size of 131072 is greater than zero copy threshold (65536). 00:38:33.954 Zero copy mechanism will not be used. 00:38:33.954 Running I/O for 2 seconds... 
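[Note] Before the 2-second randread pass below starts failing digest checks, the trace above repeats the same setup sequence as the previous pass, now for the 131072-byte workload. Roughly, with the long Jenkins workspace paths abbreviated to SPDK tree-relative names (`rpc_cmd` is the suite's wrapper for its default RPC socket, so read this as a sketch of the traced flow rather than a standalone script):

# bdevperf-side configuration over its dedicated socket: keep NVMe error statistics,
# never exhaust retries, and attach the TCP controller with data digest (--ddgst) on.
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
rpc_cmd accel_error_inject_error -o crc32c -t disable        # injection off while attaching
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32   # now corrupt crc32c results
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Once perform_tests starts the workload, the corrupted crc32c results surface as the "data digest error" / COMMAND TRANSIENT TRANSPORT ERROR pairs that fill the rest of this log, this time with len:32 rather than the single-block reads of the previous pass, which lines up with the 131072-byte I/O size if the namespace uses 4 KiB blocks.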
00:38:33.954 [2024-07-22 10:54:39.416357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:33.954 [2024-07-22 10:54:39.416390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:33.954 [2024-07-22 10:54:39.416407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:33.954 [2024-07-22 10:54:39.429001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:33.954 [2024-07-22 10:54:39.429028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:33.954 [2024-07-22 10:54:39.429038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:33.954 [2024-07-22 10:54:39.441504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:33.954 [2024-07-22 10:54:39.441526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:33.954 [2024-07-22 10:54:39.441536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:33.954 [2024-07-22 10:54:39.454832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:33.954 [2024-07-22 10:54:39.454852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:33.954 [2024-07-22 10:54:39.454862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:33.954 [2024-07-22 10:54:39.467901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:33.954 [2024-07-22 10:54:39.467922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:33.954 [2024-07-22 10:54:39.467931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:33.954 [2024-07-22 10:54:39.481170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:33.954 [2024-07-22 10:54:39.481190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:33.954 [2024-07-22 10:54:39.481199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:33.954 [2024-07-22 10:54:39.494497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:33.954 [2024-07-22 10:54:39.494517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:33.954 [2024-07-22 10:54:39.494526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:33.954 [2024-07-22 10:54:39.507716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:33.954 [2024-07-22 10:54:39.507735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:33.954 [2024-07-22 10:54:39.507745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:33.954 [2024-07-22 10:54:39.519881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:33.955 [2024-07-22 10:54:39.519901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:33.955 [2024-07-22 10:54:39.519911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:33.955 [2024-07-22 10:54:39.528273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:33.955 [2024-07-22 10:54:39.528293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:33.955 [2024-07-22 10:54:39.528303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:33.955 [2024-07-22 10:54:39.538745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:33.955 [2024-07-22 10:54:39.538765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:33.955 [2024-07-22 10:54:39.538774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:33.955 [2024-07-22 10:54:39.549879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:33.955 [2024-07-22 10:54:39.549898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:33.955 [2024-07-22 10:54:39.549907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:33.955 [2024-07-22 10:54:39.558663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:33.955 [2024-07-22 10:54:39.558683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:33.955 [2024-07-22 10:54:39.558696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:33.955 [2024-07-22 10:54:39.568154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:33.955 [2024-07-22 10:54:39.568174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:33.955 [2024-07-22 10:54:39.568183] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:33.955 [2024-07-22 10:54:39.577921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:33.955 [2024-07-22 10:54:39.577941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:33.955 [2024-07-22 10:54:39.577950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:33.955 [2024-07-22 10:54:39.587387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:33.955 [2024-07-22 10:54:39.587412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:33.955 [2024-07-22 10:54:39.587421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:33.955 [2024-07-22 10:54:39.596845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:33.955 [2024-07-22 10:54:39.596866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:33.955 [2024-07-22 10:54:39.596875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:33.955 [2024-07-22 10:54:39.606791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:33.955 [2024-07-22 10:54:39.606811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:33.955 [2024-07-22 10:54:39.606820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:33.955 [2024-07-22 10:54:39.616935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:33.955 [2024-07-22 10:54:39.616956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:33.955 [2024-07-22 10:54:39.616965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:33.955 [2024-07-22 10:54:39.625149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:33.955 [2024-07-22 10:54:39.625169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:33.955 [2024-07-22 10:54:39.625178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:33.955 [2024-07-22 10:54:39.635915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:33.955 [2024-07-22 10:54:39.635935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:38:33.955 [2024-07-22 10:54:39.635944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:33.955 [2024-07-22 10:54:39.645046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:33.955 [2024-07-22 10:54:39.645069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:33.955 [2024-07-22 10:54:39.645078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:34.215 [2024-07-22 10:54:39.655044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.215 [2024-07-22 10:54:39.655064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.215 [2024-07-22 10:54:39.655073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:34.215 [2024-07-22 10:54:39.663026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.215 [2024-07-22 10:54:39.663046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.215 [2024-07-22 10:54:39.663055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:34.215 [2024-07-22 10:54:39.672207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.215 [2024-07-22 10:54:39.672226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.215 [2024-07-22 10:54:39.672236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:34.215 [2024-07-22 10:54:39.679008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.215 [2024-07-22 10:54:39.679028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.215 [2024-07-22 10:54:39.679036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:34.215 [2024-07-22 10:54:39.687983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.215 [2024-07-22 10:54:39.688002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.215 [2024-07-22 10:54:39.688011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:34.215 [2024-07-22 10:54:39.697293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.215 [2024-07-22 10:54:39.697314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4224 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.215 [2024-07-22 10:54:39.697323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:34.215 [2024-07-22 10:54:39.708736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.215 [2024-07-22 10:54:39.708757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.215 [2024-07-22 10:54:39.708766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:34.215 [2024-07-22 10:54:39.718491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.215 [2024-07-22 10:54:39.718511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.215 [2024-07-22 10:54:39.718520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:34.215 [2024-07-22 10:54:39.727216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.215 [2024-07-22 10:54:39.727236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.215 [2024-07-22 10:54:39.727245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:34.215 [2024-07-22 10:54:39.737140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.215 [2024-07-22 10:54:39.737160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.215 [2024-07-22 10:54:39.737169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:34.215 [2024-07-22 10:54:39.746608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.215 [2024-07-22 10:54:39.746629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.215 [2024-07-22 10:54:39.746638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:34.215 [2024-07-22 10:54:39.756384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.215 [2024-07-22 10:54:39.756409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.215 [2024-07-22 10:54:39.756419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:34.215 [2024-07-22 10:54:39.766616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.215 [2024-07-22 10:54:39.766636] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.215 [2024-07-22 10:54:39.766645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:34.215 [2024-07-22 10:54:39.775369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.215 [2024-07-22 10:54:39.775390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.215 [2024-07-22 10:54:39.775403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:34.215 [2024-07-22 10:54:39.783055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.215 [2024-07-22 10:54:39.783075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.215 [2024-07-22 10:54:39.783084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:34.215 [2024-07-22 10:54:39.793963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.215 [2024-07-22 10:54:39.793984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.215 [2024-07-22 10:54:39.793993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:34.215 [2024-07-22 10:54:39.806105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.215 [2024-07-22 10:54:39.806125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.215 [2024-07-22 10:54:39.806138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:34.215 [2024-07-22 10:54:39.815892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.215 [2024-07-22 10:54:39.815911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.215 [2024-07-22 10:54:39.815920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:34.215 [2024-07-22 10:54:39.824632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.215 [2024-07-22 10:54:39.824652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.215 [2024-07-22 10:54:39.824661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:34.215 [2024-07-22 10:54:39.833637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.215 [2024-07-22 10:54:39.833657] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.215 [2024-07-22 10:54:39.833666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:34.215 [2024-07-22 10:54:39.845240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.215 [2024-07-22 10:54:39.845259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.215 [2024-07-22 10:54:39.845269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:34.215 [2024-07-22 10:54:39.854099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.215 [2024-07-22 10:54:39.854118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.216 [2024-07-22 10:54:39.854127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:34.216 [2024-07-22 10:54:39.862650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.216 [2024-07-22 10:54:39.862671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.216 [2024-07-22 10:54:39.862680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:34.216 [2024-07-22 10:54:39.870766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.216 [2024-07-22 10:54:39.870786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.216 [2024-07-22 10:54:39.870795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:34.216 [2024-07-22 10:54:39.880473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.216 [2024-07-22 10:54:39.880493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.216 [2024-07-22 10:54:39.880502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:34.216 [2024-07-22 10:54:39.891387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.216 [2024-07-22 10:54:39.891416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.216 [2024-07-22 10:54:39.891425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:34.216 [2024-07-22 10:54:39.902319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x23db460) 00:38:34.216 [2024-07-22 10:54:39.902339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.216 [2024-07-22 10:54:39.902347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:34.216 [2024-07-22 10:54:39.911005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.216 [2024-07-22 10:54:39.911025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.216 [2024-07-22 10:54:39.911034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:34.477 [2024-07-22 10:54:39.921674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.477 [2024-07-22 10:54:39.921693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.477 [2024-07-22 10:54:39.921702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:34.477 [2024-07-22 10:54:39.930676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.477 [2024-07-22 10:54:39.930695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.477 [2024-07-22 10:54:39.930705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:34.477 [2024-07-22 10:54:39.938043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.477 [2024-07-22 10:54:39.938062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.477 [2024-07-22 10:54:39.938071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:34.477 [2024-07-22 10:54:39.947737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.477 [2024-07-22 10:54:39.947760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.477 [2024-07-22 10:54:39.947770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:34.477 [2024-07-22 10:54:39.957304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.477 [2024-07-22 10:54:39.957325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.477 [2024-07-22 10:54:39.957335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:34.477 [2024-07-22 10:54:39.967379] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.477 [2024-07-22 10:54:39.967405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.477 [2024-07-22 10:54:39.967415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:34.477 [2024-07-22 10:54:39.976221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.477 [2024-07-22 10:54:39.976242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.477 [2024-07-22 10:54:39.976251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:34.477 [2024-07-22 10:54:39.986348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.477 [2024-07-22 10:54:39.986369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.477 [2024-07-22 10:54:39.986379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:34.477 [2024-07-22 10:54:39.994981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.477 [2024-07-22 10:54:39.995001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.477 [2024-07-22 10:54:39.995010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:34.477 [2024-07-22 10:54:40.005305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.477 [2024-07-22 10:54:40.005325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.477 [2024-07-22 10:54:40.005334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:34.477 [2024-07-22 10:54:40.014309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.477 [2024-07-22 10:54:40.014329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.477 [2024-07-22 10:54:40.014339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:34.477 [2024-07-22 10:54:40.024282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.477 [2024-07-22 10:54:40.024302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.477 [2024-07-22 10:54:40.024312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:38:34.477 [2024-07-22 10:54:40.034230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.477 [2024-07-22 10:54:40.034250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.477 [2024-07-22 10:54:40.034260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:34.477 [2024-07-22 10:54:40.043777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.477 [2024-07-22 10:54:40.043797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.477 [2024-07-22 10:54:40.043806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:34.477 [2024-07-22 10:54:40.054408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.477 [2024-07-22 10:54:40.054432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.477 [2024-07-22 10:54:40.054443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:34.477 [2024-07-22 10:54:40.064280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.477 [2024-07-22 10:54:40.064300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.477 [2024-07-22 10:54:40.064308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:34.477 [2024-07-22 10:54:40.074813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.477 [2024-07-22 10:54:40.074833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.477 [2024-07-22 10:54:40.074842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:34.477 [2024-07-22 10:54:40.085979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.477 [2024-07-22 10:54:40.085999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.477 [2024-07-22 10:54:40.086008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:34.477 [2024-07-22 10:54:40.095500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.477 [2024-07-22 10:54:40.095519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.477 [2024-07-22 10:54:40.095528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:34.477 [2024-07-22 10:54:40.105103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.477 [2024-07-22 10:54:40.105123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.477 [2024-07-22 10:54:40.105132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:34.477 [2024-07-22 10:54:40.114941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.477 [2024-07-22 10:54:40.114961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.477 [2024-07-22 10:54:40.114970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:34.477 [2024-07-22 10:54:40.124455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.477 [2024-07-22 10:54:40.124475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.477 [2024-07-22 10:54:40.124484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:34.477 [2024-07-22 10:54:40.135029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.477 [2024-07-22 10:54:40.135049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.477 [2024-07-22 10:54:40.135058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:34.477 [2024-07-22 10:54:40.144377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.477 [2024-07-22 10:54:40.144402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.477 [2024-07-22 10:54:40.144412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:34.477 [2024-07-22 10:54:40.154195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.477 [2024-07-22 10:54:40.154216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.477 [2024-07-22 10:54:40.154225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:34.477 [2024-07-22 10:54:40.165080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.477 [2024-07-22 10:54:40.165100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.477 [2024-07-22 10:54:40.165109] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:34.477 [2024-07-22 10:54:40.175124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.477 [2024-07-22 10:54:40.175143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.477 [2024-07-22 10:54:40.175152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:34.738 [2024-07-22 10:54:40.183730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.738 [2024-07-22 10:54:40.183750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.738 [2024-07-22 10:54:40.183759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:34.738 [2024-07-22 10:54:40.190715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.738 [2024-07-22 10:54:40.190735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.738 [2024-07-22 10:54:40.190743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:34.738 [2024-07-22 10:54:40.202762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.738 [2024-07-22 10:54:40.202781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.738 [2024-07-22 10:54:40.202790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:34.738 [2024-07-22 10:54:40.211749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.738 [2024-07-22 10:54:40.211768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.738 [2024-07-22 10:54:40.211777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:34.738 [2024-07-22 10:54:40.220985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.738 [2024-07-22 10:54:40.221004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.738 [2024-07-22 10:54:40.221017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:34.738 [2024-07-22 10:54:40.232036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.738 [2024-07-22 10:54:40.232056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.738 [2024-07-22 10:54:40.232065] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:34.738 [2024-07-22 10:54:40.241623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.738 [2024-07-22 10:54:40.241643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.738 [2024-07-22 10:54:40.241652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:34.738 [2024-07-22 10:54:40.250940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.738 [2024-07-22 10:54:40.250960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.738 [2024-07-22 10:54:40.250969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:34.738 [2024-07-22 10:54:40.261482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.738 [2024-07-22 10:54:40.261502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.738 [2024-07-22 10:54:40.261510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:34.738 [2024-07-22 10:54:40.271001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.738 [2024-07-22 10:54:40.271020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.738 [2024-07-22 10:54:40.271029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:34.738 [2024-07-22 10:54:40.280538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.738 [2024-07-22 10:54:40.280557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.738 [2024-07-22 10:54:40.280566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:34.738 [2024-07-22 10:54:40.291098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.738 [2024-07-22 10:54:40.291118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.738 [2024-07-22 10:54:40.291126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:34.738 [2024-07-22 10:54:40.301302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.738 [2024-07-22 10:54:40.301322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:38:34.738 [2024-07-22 10:54:40.301331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:34.738 [2024-07-22 10:54:40.311213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.738 [2024-07-22 10:54:40.311235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.738 [2024-07-22 10:54:40.311244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:34.738 [2024-07-22 10:54:40.321784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.738 [2024-07-22 10:54:40.321804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.738 [2024-07-22 10:54:40.321813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:34.738 [2024-07-22 10:54:40.330448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.738 [2024-07-22 10:54:40.330468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.738 [2024-07-22 10:54:40.330477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:34.738 [2024-07-22 10:54:40.342118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.738 [2024-07-22 10:54:40.342137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.738 [2024-07-22 10:54:40.342146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:34.738 [2024-07-22 10:54:40.352207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.738 [2024-07-22 10:54:40.352226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.738 [2024-07-22 10:54:40.352235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:34.738 [2024-07-22 10:54:40.363415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.738 [2024-07-22 10:54:40.363434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.738 [2024-07-22 10:54:40.363443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:34.738 [2024-07-22 10:54:40.374020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.738 [2024-07-22 10:54:40.374039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8832 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.738 [2024-07-22 10:54:40.374048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:34.738 [2024-07-22 10:54:40.384134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.738 [2024-07-22 10:54:40.384153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.738 [2024-07-22 10:54:40.384162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:34.738 [2024-07-22 10:54:40.392594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.738 [2024-07-22 10:54:40.392613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.738 [2024-07-22 10:54:40.392622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:34.738 [2024-07-22 10:54:40.402289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.738 [2024-07-22 10:54:40.402308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.738 [2024-07-22 10:54:40.402317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:34.738 [2024-07-22 10:54:40.410691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.738 [2024-07-22 10:54:40.410711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.738 [2024-07-22 10:54:40.410719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:34.738 [2024-07-22 10:54:40.422255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.738 [2024-07-22 10:54:40.422275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.738 [2024-07-22 10:54:40.422284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:34.738 [2024-07-22 10:54:40.433593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.738 [2024-07-22 10:54:40.433616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.738 [2024-07-22 10:54:40.433627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:34.999 [2024-07-22 10:54:40.442737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.999 [2024-07-22 10:54:40.442758] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.999 [2024-07-22 10:54:40.442767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:34.999 [2024-07-22 10:54:40.450836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.999 [2024-07-22 10:54:40.450855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.999 [2024-07-22 10:54:40.450865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:34.999 [2024-07-22 10:54:40.461238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.999 [2024-07-22 10:54:40.461257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.999 [2024-07-22 10:54:40.461266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:34.999 [2024-07-22 10:54:40.470554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:34.999 [2024-07-22 10:54:40.470574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.999 [2024-07-22 10:54:40.470583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:34.999 [2024-07-22 10:54:40.478587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.000 [2024-07-22 10:54:40.478606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.000 [2024-07-22 10:54:40.478620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.000 [2024-07-22 10:54:40.488215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.000 [2024-07-22 10:54:40.488235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.000 [2024-07-22 10:54:40.488243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:35.000 [2024-07-22 10:54:40.498741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.000 [2024-07-22 10:54:40.498761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.000 [2024-07-22 10:54:40.498770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:35.000 [2024-07-22 10:54:40.506606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.000 [2024-07-22 10:54:40.506626] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.000 [2024-07-22 10:54:40.506635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:35.000 [2024-07-22 10:54:40.516483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.000 [2024-07-22 10:54:40.516503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.000 [2024-07-22 10:54:40.516512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.000 [2024-07-22 10:54:40.525701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.000 [2024-07-22 10:54:40.525721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.000 [2024-07-22 10:54:40.525730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:35.000 [2024-07-22 10:54:40.535157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.000 [2024-07-22 10:54:40.535176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.000 [2024-07-22 10:54:40.535185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:35.000 [2024-07-22 10:54:40.546742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.000 [2024-07-22 10:54:40.546761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.000 [2024-07-22 10:54:40.546770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:35.000 [2024-07-22 10:54:40.556292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.000 [2024-07-22 10:54:40.556312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.000 [2024-07-22 10:54:40.556321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.000 [2024-07-22 10:54:40.565848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.000 [2024-07-22 10:54:40.565871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.000 [2024-07-22 10:54:40.565879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:35.000 [2024-07-22 10:54:40.575663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x23db460) 00:38:35.000 [2024-07-22 10:54:40.575683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.000 [2024-07-22 10:54:40.575693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:35.000 [2024-07-22 10:54:40.585986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.000 [2024-07-22 10:54:40.586006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.000 [2024-07-22 10:54:40.586015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:35.000 [2024-07-22 10:54:40.596176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.000 [2024-07-22 10:54:40.596196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.000 [2024-07-22 10:54:40.596205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.000 [2024-07-22 10:54:40.606849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.000 [2024-07-22 10:54:40.606869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.000 [2024-07-22 10:54:40.606879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:35.000 [2024-07-22 10:54:40.616584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.000 [2024-07-22 10:54:40.616604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.000 [2024-07-22 10:54:40.616613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:35.000 [2024-07-22 10:54:40.624955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.000 [2024-07-22 10:54:40.624974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.000 [2024-07-22 10:54:40.624983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:35.000 [2024-07-22 10:54:40.635069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.000 [2024-07-22 10:54:40.635089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.000 [2024-07-22 10:54:40.635098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.000 [2024-07-22 10:54:40.644345] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.000 [2024-07-22 10:54:40.644365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.000 [2024-07-22 10:54:40.644381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:35.000 [2024-07-22 10:54:40.654771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.000 [2024-07-22 10:54:40.654790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.000 [2024-07-22 10:54:40.654799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:35.000 [2024-07-22 10:54:40.666034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.000 [2024-07-22 10:54:40.666053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.000 [2024-07-22 10:54:40.666061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:35.000 [2024-07-22 10:54:40.675830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.000 [2024-07-22 10:54:40.675850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.000 [2024-07-22 10:54:40.675860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.000 [2024-07-22 10:54:40.685316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.000 [2024-07-22 10:54:40.685335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.000 [2024-07-22 10:54:40.685345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:35.000 [2024-07-22 10:54:40.696789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.000 [2024-07-22 10:54:40.696809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.000 [2024-07-22 10:54:40.696818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:35.261 [2024-07-22 10:54:40.705596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.261 [2024-07-22 10:54:40.705616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.261 [2024-07-22 10:54:40.705625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:38:35.261 [2024-07-22 10:54:40.716991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.261 [2024-07-22 10:54:40.717011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.261 [2024-07-22 10:54:40.717020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.261 [2024-07-22 10:54:40.729182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.261 [2024-07-22 10:54:40.729202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.261 [2024-07-22 10:54:40.729211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:35.261 [2024-07-22 10:54:40.738646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.261 [2024-07-22 10:54:40.738669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.261 [2024-07-22 10:54:40.738677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:35.261 [2024-07-22 10:54:40.749401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.261 [2024-07-22 10:54:40.749421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.261 [2024-07-22 10:54:40.749429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:35.261 [2024-07-22 10:54:40.760617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.261 [2024-07-22 10:54:40.760637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.261 [2024-07-22 10:54:40.760646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.261 [2024-07-22 10:54:40.771059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.261 [2024-07-22 10:54:40.771078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.261 [2024-07-22 10:54:40.771087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:35.261 [2024-07-22 10:54:40.779288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.261 [2024-07-22 10:54:40.779307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.261 [2024-07-22 10:54:40.779316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:35.261 [2024-07-22 10:54:40.788979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.261 [2024-07-22 10:54:40.788999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.261 [2024-07-22 10:54:40.789008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:35.261 [2024-07-22 10:54:40.798823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.261 [2024-07-22 10:54:40.798843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.261 [2024-07-22 10:54:40.798852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.261 [2024-07-22 10:54:40.808957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.261 [2024-07-22 10:54:40.808977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.261 [2024-07-22 10:54:40.808986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:35.261 [2024-07-22 10:54:40.818364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.261 [2024-07-22 10:54:40.818384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.261 [2024-07-22 10:54:40.818393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:35.261 [2024-07-22 10:54:40.828686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.261 [2024-07-22 10:54:40.828706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.261 [2024-07-22 10:54:40.828715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:35.261 [2024-07-22 10:54:40.838851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.261 [2024-07-22 10:54:40.838871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.261 [2024-07-22 10:54:40.838880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.261 [2024-07-22 10:54:40.849971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.261 [2024-07-22 10:54:40.849991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.261 [2024-07-22 10:54:40.850000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:35.261 [2024-07-22 10:54:40.858901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.261 [2024-07-22 10:54:40.858921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.261 [2024-07-22 10:54:40.858930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:35.261 [2024-07-22 10:54:40.871638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.261 [2024-07-22 10:54:40.871658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.261 [2024-07-22 10:54:40.871667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:35.261 [2024-07-22 10:54:40.882954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.261 [2024-07-22 10:54:40.882973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.261 [2024-07-22 10:54:40.882981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.261 [2024-07-22 10:54:40.893932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.261 [2024-07-22 10:54:40.893951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.261 [2024-07-22 10:54:40.893960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:35.261 [2024-07-22 10:54:40.904953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.261 [2024-07-22 10:54:40.904972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.261 [2024-07-22 10:54:40.904980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:35.261 [2024-07-22 10:54:40.914733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.261 [2024-07-22 10:54:40.914752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.261 [2024-07-22 10:54:40.914765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:35.261 [2024-07-22 10:54:40.923754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.261 [2024-07-22 10:54:40.923774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.261 [2024-07-22 10:54:40.923783] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.261 [2024-07-22 10:54:40.931486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.261 [2024-07-22 10:54:40.931506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.261 [2024-07-22 10:54:40.931515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:35.261 [2024-07-22 10:54:40.941007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.261 [2024-07-22 10:54:40.941027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.261 [2024-07-22 10:54:40.941036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:35.261 [2024-07-22 10:54:40.946927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.261 [2024-07-22 10:54:40.946946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.261 [2024-07-22 10:54:40.946955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:35.261 [2024-07-22 10:54:40.957123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.261 [2024-07-22 10:54:40.957143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.261 [2024-07-22 10:54:40.957152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.522 [2024-07-22 10:54:40.964432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.522 [2024-07-22 10:54:40.964452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.522 [2024-07-22 10:54:40.964460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:35.522 [2024-07-22 10:54:40.973666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.522 [2024-07-22 10:54:40.973685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.522 [2024-07-22 10:54:40.973694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:35.522 [2024-07-22 10:54:40.984259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.522 [2024-07-22 10:54:40.984278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.522 
[2024-07-22 10:54:40.984287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:35.522 [2024-07-22 10:54:40.994487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.522 [2024-07-22 10:54:40.994511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.522 [2024-07-22 10:54:40.994520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.522 [2024-07-22 10:54:41.003474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.522 [2024-07-22 10:54:41.003493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.522 [2024-07-22 10:54:41.003502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:35.522 [2024-07-22 10:54:41.011477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.522 [2024-07-22 10:54:41.011498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.522 [2024-07-22 10:54:41.011507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:35.522 [2024-07-22 10:54:41.020746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.522 [2024-07-22 10:54:41.020766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.522 [2024-07-22 10:54:41.020775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:35.522 [2024-07-22 10:54:41.030714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.522 [2024-07-22 10:54:41.030734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.522 [2024-07-22 10:54:41.030744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.522 [2024-07-22 10:54:41.041539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.522 [2024-07-22 10:54:41.041559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.522 [2024-07-22 10:54:41.041568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:35.522 [2024-07-22 10:54:41.049981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.522 [2024-07-22 10:54:41.050001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4000 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.522 [2024-07-22 10:54:41.050010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:35.522 [2024-07-22 10:54:41.059527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.522 [2024-07-22 10:54:41.059547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.522 [2024-07-22 10:54:41.059556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:35.522 [2024-07-22 10:54:41.071304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.522 [2024-07-22 10:54:41.071325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.522 [2024-07-22 10:54:41.071334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.522 [2024-07-22 10:54:41.082086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.522 [2024-07-22 10:54:41.082106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.522 [2024-07-22 10:54:41.082115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:35.522 [2024-07-22 10:54:41.092287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.522 [2024-07-22 10:54:41.092307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.522 [2024-07-22 10:54:41.092315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:35.522 [2024-07-22 10:54:41.101214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.522 [2024-07-22 10:54:41.101234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.523 [2024-07-22 10:54:41.101243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:35.523 [2024-07-22 10:54:41.109658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.523 [2024-07-22 10:54:41.109678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.523 [2024-07-22 10:54:41.109687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.523 [2024-07-22 10:54:41.120056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.523 [2024-07-22 10:54:41.120076] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.523 [2024-07-22 10:54:41.120085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:35.523 [2024-07-22 10:54:41.129343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.523 [2024-07-22 10:54:41.129363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.523 [2024-07-22 10:54:41.129372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:35.523 [2024-07-22 10:54:41.139221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.523 [2024-07-22 10:54:41.139241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.523 [2024-07-22 10:54:41.139250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:35.523 [2024-07-22 10:54:41.148101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.523 [2024-07-22 10:54:41.148121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.523 [2024-07-22 10:54:41.148130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.523 [2024-07-22 10:54:41.156926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.523 [2024-07-22 10:54:41.156946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.523 [2024-07-22 10:54:41.156958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:35.523 [2024-07-22 10:54:41.165160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.523 [2024-07-22 10:54:41.165180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.523 [2024-07-22 10:54:41.165190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:35.523 [2024-07-22 10:54:41.175133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.523 [2024-07-22 10:54:41.175153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.523 [2024-07-22 10:54:41.175161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:35.523 [2024-07-22 10:54:41.185172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.523 [2024-07-22 10:54:41.185191] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.523 [2024-07-22 10:54:41.185201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.523 [2024-07-22 10:54:41.194860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.523 [2024-07-22 10:54:41.194880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.523 [2024-07-22 10:54:41.194889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:35.523 [2024-07-22 10:54:41.206483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.523 [2024-07-22 10:54:41.206502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.523 [2024-07-22 10:54:41.206511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:35.523 [2024-07-22 10:54:41.214431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.523 [2024-07-22 10:54:41.214451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.523 [2024-07-22 10:54:41.214460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:35.784 [2024-07-22 10:54:41.227318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.784 [2024-07-22 10:54:41.227338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.784 [2024-07-22 10:54:41.227348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.784 [2024-07-22 10:54:41.240138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.784 [2024-07-22 10:54:41.240158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.784 [2024-07-22 10:54:41.240167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:35.784 [2024-07-22 10:54:41.249285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.784 [2024-07-22 10:54:41.249308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.784 [2024-07-22 10:54:41.249317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:35.784 [2024-07-22 10:54:41.260579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x23db460) 00:38:35.784 [2024-07-22 10:54:41.260599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.784 [2024-07-22 10:54:41.260608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:35.784 [2024-07-22 10:54:41.269596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.784 [2024-07-22 10:54:41.269617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.784 [2024-07-22 10:54:41.269626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.784 [2024-07-22 10:54:41.278599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.784 [2024-07-22 10:54:41.278618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.784 [2024-07-22 10:54:41.278627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:35.784 [2024-07-22 10:54:41.287087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.784 [2024-07-22 10:54:41.287107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.784 [2024-07-22 10:54:41.287116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:35.784 [2024-07-22 10:54:41.297754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.784 [2024-07-22 10:54:41.297774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.784 [2024-07-22 10:54:41.297783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:35.784 [2024-07-22 10:54:41.307255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.784 [2024-07-22 10:54:41.307275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.784 [2024-07-22 10:54:41.307284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.784 [2024-07-22 10:54:41.316542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.784 [2024-07-22 10:54:41.316561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.784 [2024-07-22 10:54:41.316570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:35.784 [2024-07-22 10:54:41.326126] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.784 [2024-07-22 10:54:41.326146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.784 [2024-07-22 10:54:41.326155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:35.784 [2024-07-22 10:54:41.335797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.784 [2024-07-22 10:54:41.335817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.784 [2024-07-22 10:54:41.335827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:35.784 [2024-07-22 10:54:41.346509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.784 [2024-07-22 10:54:41.346528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.784 [2024-07-22 10:54:41.346537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.784 [2024-07-22 10:54:41.355844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.784 [2024-07-22 10:54:41.355864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.784 [2024-07-22 10:54:41.355874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:35.784 [2024-07-22 10:54:41.363419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.784 [2024-07-22 10:54:41.363438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.784 [2024-07-22 10:54:41.363447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:35.784 [2024-07-22 10:54:41.372012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.784 [2024-07-22 10:54:41.372032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.784 [2024-07-22 10:54:41.372040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:35.784 [2024-07-22 10:54:41.379214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.784 [2024-07-22 10:54:41.379233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.785 [2024-07-22 10:54:41.379242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:38:35.785 [2024-07-22 10:54:41.388266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.785 [2024-07-22 10:54:41.388286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.785 [2024-07-22 10:54:41.388296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:35.785 [2024-07-22 10:54:41.397908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23db460) 00:38:35.785 [2024-07-22 10:54:41.397928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.785 [2024-07-22 10:54:41.397937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:35.785 00:38:35.785 Latency(us) 00:38:35.785 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:35.785 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:38:35.785 nvme0n1 : 2.00 3143.60 392.95 0.00 0.00 5086.80 1221.97 14308.69 00:38:35.785 =================================================================================================================== 00:38:35.785 Total : 3143.60 392.95 0.00 0.00 5086.80 1221.97 14308.69 00:38:35.785 0 00:38:35.785 10:54:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:38:35.785 10:54:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:38:35.785 10:54:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:38:35.785 | .driver_specific 00:38:35.785 | .nvme_error 00:38:35.785 | .status_code 00:38:35.785 | .command_transient_transport_error' 00:38:35.785 10:54:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:38:36.044 10:54:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 202 > 0 )) 00:38:36.044 10:54:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2250123 00:38:36.044 10:54:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2250123 ']' 00:38:36.044 10:54:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2250123 00:38:36.044 10:54:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:38:36.044 10:54:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:36.044 10:54:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2250123 00:38:36.044 10:54:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:38:36.044 10:54:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:38:36.044 10:54:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2250123' 00:38:36.044 killing process with pid 2250123 00:38:36.044 10:54:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # 
kill 2250123 00:38:36.044 Received shutdown signal, test time was about 2.000000 seconds 00:38:36.044 00:38:36.044 Latency(us) 00:38:36.044 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:36.044 =================================================================================================================== 00:38:36.044 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:36.044 10:54:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2250123 00:38:36.304 10:54:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:38:36.304 10:54:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:38:36.304 10:54:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:38:36.304 10:54:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:38:36.304 10:54:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:38:36.304 10:54:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2250839 00:38:36.304 10:54:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2250839 /var/tmp/bperf.sock 00:38:36.304 10:54:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2250839 ']' 00:38:36.304 10:54:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:36.304 10:54:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:36.304 10:54:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:36.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:36.304 10:54:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:36.304 10:54:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:36.304 10:54:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:38:36.304 [2024-07-22 10:54:41.792818] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
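The get_transient_errcount check that gated the killprocess above boils down to roughly the following shell, reconstructed from the xtrace of host/digest.sh (a sketch, not the verbatim script; the socket path, bdev name and pid are the ones used in this run, and rpc.py is abbreviated from the full workspace path shown above):

    get_transient_errcount() {
        # bperf_rpc: scripts/rpc.py -s /var/tmp/bperf.sock, i.e. RPC against the bdevperf instance
        scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$1" |
            jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
    }
    (( $(get_transient_errcount nvme0n1) > 0 ))   # the randread pass above counted 202
    killprocess 2250123                           # then tear down that bdevperf instance

run_bperf_err randwrite 4096 128 then repeats the setup for the write path: a fresh bdevperf is launched with -w randwrite -o 4096 -q 128 -t 2 -z, and the test waits for its RPC socket at /var/tmp/bperf.sock before configuring it, as the startup messages that follow show.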
00:38:36.304 [2024-07-22 10:54:41.792873] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2250839 ] 00:38:36.304 EAL: No free 2048 kB hugepages reported on node 1 00:38:36.304 [2024-07-22 10:54:41.869908] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:36.304 [2024-07-22 10:54:41.898358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:36.874 10:54:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:36.874 10:54:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:38:36.874 10:54:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:38:36.874 10:54:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:38:37.134 10:54:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:38:37.134 10:54:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:37.134 10:54:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:37.134 10:54:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:37.134 10:54:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:37.134 10:54:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:37.393 nvme0n1 00:38:37.393 10:54:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:38:37.393 10:54:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:37.393 10:54:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:37.393 10:54:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:37.393 10:54:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:38:37.393 10:54:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:37.657 Running I/O for 2 seconds... 
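Condensed, the randwrite error pass configured above is the following RPC sequence (again a sketch from the xtrace; the -s /var/tmp/bperf.sock calls go to the bdevperf instance, while rpc_cmd presumably hits the nvmf target application's default RPC socket, and the script paths are abbreviated from the full workspace paths shown above):

    # keep per-error-code NVMe statistics and retry failed I/O at the bdev layer (-1 = unlimited)
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # start with error injection disabled, then attach over TCP with data digest enabled
    rpc_cmd accel_error_inject_error -o crc32c -t disable
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # corrupt crc32c results in the accel layer, then drive the 2-second randwrite workload
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

With --ddgst enabled and crc32c results corrupted, the data digests computed for the transferred payloads stop matching, so affected WRITEs fail the digest check and are completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22); those completions, paired with the tcp.c "Data digest error" messages, make up the remainder of this pass.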
00:38:37.657 [2024-07-22 10:54:43.163164] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190ebfd0 00:38:37.657 [2024-07-22 10:54:43.164942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:37.657 [2024-07-22 10:54:43.164966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:38:37.657 [2024-07-22 10:54:43.173831] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e7818 00:38:37.657 [2024-07-22 10:54:43.175101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:6122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:37.657 [2024-07-22 10:54:43.175125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:38:37.657 [2024-07-22 10:54:43.187264] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e88f8 00:38:37.657 [2024-07-22 10:54:43.189209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:21405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:37.657 [2024-07-22 10:54:43.189227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:38:37.657 [2024-07-22 10:54:43.197883] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190ed0b0 00:38:37.657 [2024-07-22 10:54:43.199360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:22682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:37.657 [2024-07-22 10:54:43.199377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:38:37.657 [2024-07-22 10:54:43.208967] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190ed920 00:38:37.657 [2024-07-22 10:54:43.210371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:37.657 [2024-07-22 10:54:43.210388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:38:37.657 [2024-07-22 10:54:43.221460] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190ed920 00:38:37.657 [2024-07-22 10:54:43.222909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:37.657 [2024-07-22 10:54:43.222926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:38:37.657 [2024-07-22 10:54:43.233143] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190ed920 00:38:37.657 [2024-07-22 10:54:43.234581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:15876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:37.657 [2024-07-22 10:54:43.234597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 
cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:38:37.657 [2024-07-22 10:54:43.244856] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190ed920 00:38:37.657 [2024-07-22 10:54:43.246318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:37.657 [2024-07-22 10:54:43.246334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:38:37.657 [2024-07-22 10:54:43.256584] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190ed920 00:38:37.657 [2024-07-22 10:54:43.258031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:37.657 [2024-07-22 10:54:43.258047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:38:37.657 [2024-07-22 10:54:43.268282] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190ed920 00:38:37.657 [2024-07-22 10:54:43.269734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:37.657 [2024-07-22 10:54:43.269750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:38:37.657 [2024-07-22 10:54:43.279975] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190ed920 00:38:37.657 [2024-07-22 10:54:43.281426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:37.657 [2024-07-22 10:54:43.281442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:38:37.657 [2024-07-22 10:54:43.291736] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190ed920 00:38:37.657 [2024-07-22 10:54:43.293191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:15449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:37.657 [2024-07-22 10:54:43.293208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:38:37.657 [2024-07-22 10:54:43.303440] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190ed920 00:38:37.657 [2024-07-22 10:54:43.304885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:37.657 [2024-07-22 10:54:43.304902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:38:37.657 [2024-07-22 10:54:43.315130] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190ed920 00:38:37.657 [2024-07-22 10:54:43.316577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:8915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:37.657 [2024-07-22 10:54:43.316593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:38:37.657 [2024-07-22 10:54:43.326818] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190ed920 00:38:37.657 [2024-07-22 10:54:43.328272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:37.657 [2024-07-22 10:54:43.328288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:38:37.657 [2024-07-22 10:54:43.338518] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190ed920 00:38:37.657 [2024-07-22 10:54:43.339959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:37.657 [2024-07-22 10:54:43.339975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:38:37.657 [2024-07-22 10:54:43.350200] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190ed920 00:38:37.657 [2024-07-22 10:54:43.351652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:37.657 [2024-07-22 10:54:43.351668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:38:37.975 [2024-07-22 10:54:43.361933] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190ed920 00:38:37.975 [2024-07-22 10:54:43.363384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:9897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:37.975 [2024-07-22 10:54:43.363403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:38:37.975 [2024-07-22 10:54:43.373628] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190ed920 00:38:37.975 [2024-07-22 10:54:43.375072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:37.975 [2024-07-22 10:54:43.375087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:38:37.975 [2024-07-22 10:54:43.385319] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190ed920 00:38:37.975 [2024-07-22 10:54:43.386773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:7855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:37.975 [2024-07-22 10:54:43.386789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:38:37.975 [2024-07-22 10:54:43.397061] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190ed920 00:38:37.975 [2024-07-22 10:54:43.398394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:17575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:37.975 [2024-07-22 10:54:43.398421] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:38:37.975 [2024-07-22 10:54:43.407905] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190ee190 00:38:37.975 [2024-07-22 10:54:43.409317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:24013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:37.975 [2024-07-22 10:54:43.409333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:38:37.975 [2024-07-22 10:54:43.420435] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190ed0b0 00:38:37.975 [2024-07-22 10:54:43.421864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:37.975 [2024-07-22 10:54:43.421880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:38:37.975 [2024-07-22 10:54:43.432156] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190ebfd0 00:38:37.975 [2024-07-22 10:54:43.433581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:3805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:37.975 [2024-07-22 10:54:43.433597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:38:37.975 [2024-07-22 10:54:43.443842] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:37.975 [2024-07-22 10:54:43.445281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:9472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:37.975 [2024-07-22 10:54:43.445297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:37.975 [2024-07-22 10:54:43.455550] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:37.975 [2024-07-22 10:54:43.456942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:20977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:37.975 [2024-07-22 10:54:43.456958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:37.975 [2024-07-22 10:54:43.467241] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:37.975 [2024-07-22 10:54:43.468667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:17366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:37.975 [2024-07-22 10:54:43.468684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:37.975 [2024-07-22 10:54:43.478944] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:37.975 [2024-07-22 10:54:43.480377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:37.975 
[2024-07-22 10:54:43.480400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:37.975 [2024-07-22 10:54:43.490624] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:37.975 [2024-07-22 10:54:43.492038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:37.975 [2024-07-22 10:54:43.492054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:37.975 [2024-07-22 10:54:43.502362] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:37.975 [2024-07-22 10:54:43.503793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:37.975 [2024-07-22 10:54:43.503810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:37.975 [2024-07-22 10:54:43.514049] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:37.975 [2024-07-22 10:54:43.515476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:37.975 [2024-07-22 10:54:43.515492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:37.975 [2024-07-22 10:54:43.525881] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:37.975 [2024-07-22 10:54:43.527292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:37.975 [2024-07-22 10:54:43.527309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:37.975 [2024-07-22 10:54:43.537577] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:37.975 [2024-07-22 10:54:43.539011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:24687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:37.975 [2024-07-22 10:54:43.539028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:37.975 [2024-07-22 10:54:43.549275] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:37.975 [2024-07-22 10:54:43.550699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:37.975 [2024-07-22 10:54:43.550715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:37.975 [2024-07-22 10:54:43.560965] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:37.975 [2024-07-22 10:54:43.562398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:24721 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:38:37.975 [2024-07-22 10:54:43.562414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:37.975 [2024-07-22 10:54:43.572648] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:37.975 [2024-07-22 10:54:43.574078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:37.975 [2024-07-22 10:54:43.574094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:37.975 [2024-07-22 10:54:43.584468] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:37.975 [2024-07-22 10:54:43.585889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:37.975 [2024-07-22 10:54:43.585906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:37.975 [2024-07-22 10:54:43.596158] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:37.975 [2024-07-22 10:54:43.597557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:9526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:37.976 [2024-07-22 10:54:43.597573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:37.976 [2024-07-22 10:54:43.607843] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:37.976 [2024-07-22 10:54:43.609268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:23108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:37.976 [2024-07-22 10:54:43.609284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:37.976 [2024-07-22 10:54:43.619523] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:37.976 [2024-07-22 10:54:43.620960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:19096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:37.976 [2024-07-22 10:54:43.620976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:37.976 [2024-07-22 10:54:43.631202] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:37.976 [2024-07-22 10:54:43.632635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:37.976 [2024-07-22 10:54:43.632651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:37.976 [2024-07-22 10:54:43.642897] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:37.976 [2024-07-22 10:54:43.644322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 
lba:18464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:37.976 [2024-07-22 10:54:43.644338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:37.976 [2024-07-22 10:54:43.654573] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:37.976 [2024-07-22 10:54:43.656000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:37.976 [2024-07-22 10:54:43.656016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:37.976 [2024-07-22 10:54:43.666267] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:37.976 [2024-07-22 10:54:43.667687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:18776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:37.976 [2024-07-22 10:54:43.667703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.284 [2024-07-22 10:54:43.677948] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.284 [2024-07-22 10:54:43.679369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.284 [2024-07-22 10:54:43.679385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.284 [2024-07-22 10:54:43.689642] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.284 [2024-07-22 10:54:43.691064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.284 [2024-07-22 10:54:43.691080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.284 [2024-07-22 10:54:43.701308] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.284 [2024-07-22 10:54:43.702733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.284 [2024-07-22 10:54:43.702748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.284 [2024-07-22 10:54:43.713026] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.284 [2024-07-22 10:54:43.714421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.284 [2024-07-22 10:54:43.714437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.284 [2024-07-22 10:54:43.724697] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.284 [2024-07-22 10:54:43.726129] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:21618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.284 [2024-07-22 10:54:43.726145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.284 [2024-07-22 10:54:43.736368] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.284 [2024-07-22 10:54:43.737795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:17598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.284 [2024-07-22 10:54:43.737811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.284 [2024-07-22 10:54:43.748046] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.284 [2024-07-22 10:54:43.749470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.284 [2024-07-22 10:54:43.749486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.284 [2024-07-22 10:54:43.759926] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.284 [2024-07-22 10:54:43.761352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.284 [2024-07-22 10:54:43.761368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.284 [2024-07-22 10:54:43.771598] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.284 [2024-07-22 10:54:43.773021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.284 [2024-07-22 10:54:43.773037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.284 [2024-07-22 10:54:43.783279] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.284 [2024-07-22 10:54:43.784710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.284 [2024-07-22 10:54:43.784729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.284 [2024-07-22 10:54:43.794962] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.284 [2024-07-22 10:54:43.796388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.284 [2024-07-22 10:54:43.796408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.284 [2024-07-22 10:54:43.806656] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.284 [2024-07-22 10:54:43.808076] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:14777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.284 [2024-07-22 10:54:43.808092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.284 [2024-07-22 10:54:43.818323] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.284 [2024-07-22 10:54:43.819758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.284 [2024-07-22 10:54:43.819774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.284 [2024-07-22 10:54:43.830004] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.284 [2024-07-22 10:54:43.831427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:2866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.284 [2024-07-22 10:54:43.831443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.284 [2024-07-22 10:54:43.841685] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.284 [2024-07-22 10:54:43.843103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.284 [2024-07-22 10:54:43.843119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.284 [2024-07-22 10:54:43.853365] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.284 [2024-07-22 10:54:43.854797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.284 [2024-07-22 10:54:43.854812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.284 [2024-07-22 10:54:43.865053] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.284 [2024-07-22 10:54:43.866482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:11707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.284 [2024-07-22 10:54:43.866498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.284 [2024-07-22 10:54:43.876740] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.284 [2024-07-22 10:54:43.878162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.284 [2024-07-22 10:54:43.878178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.284 [2024-07-22 10:54:43.888404] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.284 [2024-07-22 
10:54:43.889834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.284 [2024-07-22 10:54:43.889850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.284 [2024-07-22 10:54:43.900083] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.285 [2024-07-22 10:54:43.901501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:20342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.285 [2024-07-22 10:54:43.901517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.285 [2024-07-22 10:54:43.911773] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.285 [2024-07-22 10:54:43.913193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:6193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.285 [2024-07-22 10:54:43.913209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.285 [2024-07-22 10:54:43.923463] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.285 [2024-07-22 10:54:43.924888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.285 [2024-07-22 10:54:43.924904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.285 [2024-07-22 10:54:43.935131] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.285 [2024-07-22 10:54:43.936557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.285 [2024-07-22 10:54:43.936572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.285 [2024-07-22 10:54:43.946811] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.285 [2024-07-22 10:54:43.948231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:18246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.285 [2024-07-22 10:54:43.948247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.285 [2024-07-22 10:54:43.958503] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.285 [2024-07-22 10:54:43.959929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:11011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.285 [2024-07-22 10:54:43.959944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.285 [2024-07-22 10:54:43.970182] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with 
pdu=0x2000190e5658 00:38:38.285 [2024-07-22 10:54:43.971592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.285 [2024-07-22 10:54:43.971608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.285 [2024-07-22 10:54:43.981863] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.546 [2024-07-22 10:54:43.983282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.546 [2024-07-22 10:54:43.983298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.546 [2024-07-22 10:54:43.993548] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.546 [2024-07-22 10:54:43.994950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:17697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.546 [2024-07-22 10:54:43.994966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.546 [2024-07-22 10:54:44.005226] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.546 [2024-07-22 10:54:44.006664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:8845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.546 [2024-07-22 10:54:44.006680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.546 [2024-07-22 10:54:44.016903] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.546 [2024-07-22 10:54:44.018326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.546 [2024-07-22 10:54:44.018342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.546 [2024-07-22 10:54:44.028570] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.546 [2024-07-22 10:54:44.029990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.546 [2024-07-22 10:54:44.030006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.546 [2024-07-22 10:54:44.040250] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.546 [2024-07-22 10:54:44.041642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:3794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.546 [2024-07-22 10:54:44.041658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.546 [2024-07-22 10:54:44.051942] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.546 [2024-07-22 10:54:44.053370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:11474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.546 [2024-07-22 10:54:44.053386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.546 [2024-07-22 10:54:44.063632] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.546 [2024-07-22 10:54:44.065052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.546 [2024-07-22 10:54:44.065069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.546 [2024-07-22 10:54:44.075306] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.546 [2024-07-22 10:54:44.076736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.546 [2024-07-22 10:54:44.076752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.546 [2024-07-22 10:54:44.086984] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.546 [2024-07-22 10:54:44.088411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:6230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.546 [2024-07-22 10:54:44.088433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.546 [2024-07-22 10:54:44.098658] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.546 [2024-07-22 10:54:44.100082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.546 [2024-07-22 10:54:44.100098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.546 [2024-07-22 10:54:44.110337] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.546 [2024-07-22 10:54:44.111766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:21536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.546 [2024-07-22 10:54:44.111782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.546 [2024-07-22 10:54:44.122048] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.546 [2024-07-22 10:54:44.123475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:5594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.546 [2024-07-22 10:54:44.123491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.546 [2024-07-22 10:54:44.133747] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.546 [2024-07-22 10:54:44.135178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.546 [2024-07-22 10:54:44.135194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.546 [2024-07-22 10:54:44.145410] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.546 [2024-07-22 10:54:44.146828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:21026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.546 [2024-07-22 10:54:44.146844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.546 [2024-07-22 10:54:44.157081] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.546 [2024-07-22 10:54:44.158519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.546 [2024-07-22 10:54:44.158534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.546 [2024-07-22 10:54:44.168754] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.546 [2024-07-22 10:54:44.170193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:6335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.546 [2024-07-22 10:54:44.170210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.546 [2024-07-22 10:54:44.180429] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.546 [2024-07-22 10:54:44.181849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:10672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.546 [2024-07-22 10:54:44.181866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.546 [2024-07-22 10:54:44.192113] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.546 [2024-07-22 10:54:44.193504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.546 [2024-07-22 10:54:44.193520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.547 [2024-07-22 10:54:44.203810] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.547 [2024-07-22 10:54:44.205236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:24445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.547 [2024-07-22 10:54:44.205252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.547 
[2024-07-22 10:54:44.215477] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.547 [2024-07-22 10:54:44.216907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.547 [2024-07-22 10:54:44.216923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.547 [2024-07-22 10:54:44.227169] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.547 [2024-07-22 10:54:44.228596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.547 [2024-07-22 10:54:44.228611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.547 [2024-07-22 10:54:44.238850] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.547 [2024-07-22 10:54:44.240273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:21743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.547 [2024-07-22 10:54:44.240289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.807 [2024-07-22 10:54:44.250533] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.807 [2024-07-22 10:54:44.251956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.807 [2024-07-22 10:54:44.251972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.807 [2024-07-22 10:54:44.262197] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.807 [2024-07-22 10:54:44.263599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.807 [2024-07-22 10:54:44.263615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.807 [2024-07-22 10:54:44.273902] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.807 [2024-07-22 10:54:44.275324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:4299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.807 [2024-07-22 10:54:44.275340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.807 [2024-07-22 10:54:44.285573] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.807 [2024-07-22 10:54:44.287001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.807 [2024-07-22 10:54:44.287017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 
sqhd:007a p:0 m:0 dnr:0 00:38:38.807 [2024-07-22 10:54:44.297260] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.807 [2024-07-22 10:54:44.298691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:24188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.807 [2024-07-22 10:54:44.298707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.807 [2024-07-22 10:54:44.308930] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.807 [2024-07-22 10:54:44.310356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:5155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.807 [2024-07-22 10:54:44.310372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.807 [2024-07-22 10:54:44.320623] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.807 [2024-07-22 10:54:44.322052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.807 [2024-07-22 10:54:44.322067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.807 [2024-07-22 10:54:44.332326] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.807 [2024-07-22 10:54:44.333756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.807 [2024-07-22 10:54:44.333772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.807 [2024-07-22 10:54:44.344005] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.807 [2024-07-22 10:54:44.345423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.807 [2024-07-22 10:54:44.345438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.807 [2024-07-22 10:54:44.355669] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.807 [2024-07-22 10:54:44.357061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.807 [2024-07-22 10:54:44.357077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.807 [2024-07-22 10:54:44.367359] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.808 [2024-07-22 10:54:44.368784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:15428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.808 [2024-07-22 10:54:44.368800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.808 [2024-07-22 10:54:44.379023] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.808 [2024-07-22 10:54:44.380447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.808 [2024-07-22 10:54:44.380463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.808 [2024-07-22 10:54:44.390699] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.808 [2024-07-22 10:54:44.392122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:21112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.808 [2024-07-22 10:54:44.392140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.808 [2024-07-22 10:54:44.402350] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.808 [2024-07-22 10:54:44.403757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.808 [2024-07-22 10:54:44.403773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.808 [2024-07-22 10:54:44.414014] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.808 [2024-07-22 10:54:44.415430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.808 [2024-07-22 10:54:44.415446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.808 [2024-07-22 10:54:44.425680] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.808 [2024-07-22 10:54:44.427109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.808 [2024-07-22 10:54:44.427125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.808 [2024-07-22 10:54:44.437371] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.808 [2024-07-22 10:54:44.438775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:18354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.808 [2024-07-22 10:54:44.438791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.808 [2024-07-22 10:54:44.449053] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.808 [2024-07-22 10:54:44.450476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.808 [2024-07-22 10:54:44.450492] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.808 [2024-07-22 10:54:44.460741] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.808 [2024-07-22 10:54:44.462170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:7608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.808 [2024-07-22 10:54:44.462186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.808 [2024-07-22 10:54:44.472405] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.808 [2024-07-22 10:54:44.473824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:11417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.808 [2024-07-22 10:54:44.473840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.808 [2024-07-22 10:54:44.484075] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.808 [2024-07-22 10:54:44.485480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:7115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.808 [2024-07-22 10:54:44.485496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:38.808 [2024-07-22 10:54:44.495766] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:38.808 [2024-07-22 10:54:44.497201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:17190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.808 [2024-07-22 10:54:44.497217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:39.069 [2024-07-22 10:54:44.507463] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:39.069 [2024-07-22 10:54:44.508885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:39.069 [2024-07-22 10:54:44.508901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:39.069 [2024-07-22 10:54:44.519117] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:39.069 [2024-07-22 10:54:44.520548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:5915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:39.069 [2024-07-22 10:54:44.520564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:39.069 [2024-07-22 10:54:44.530802] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:39.069 [2024-07-22 10:54:44.532312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:39.069 [2024-07-22 10:54:44.532327] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:39.069 [2024-07-22 10:54:44.542581] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:39.069 [2024-07-22 10:54:44.544042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:39.069 [2024-07-22 10:54:44.544058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:39.069 [2024-07-22 10:54:44.554317] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:39.069 [2024-07-22 10:54:44.555726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:39.069 [2024-07-22 10:54:44.555743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:39.069 [2024-07-22 10:54:44.565980] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:39.069 [2024-07-22 10:54:44.567401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:11788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:39.069 [2024-07-22 10:54:44.567417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:39.069 [2024-07-22 10:54:44.577676] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:39.069 [2024-07-22 10:54:44.579062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:39.069 [2024-07-22 10:54:44.579079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:39.069 [2024-07-22 10:54:44.589343] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:39.069 [2024-07-22 10:54:44.590782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:39.069 [2024-07-22 10:54:44.590798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:39.069 [2024-07-22 10:54:44.601036] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:39.069 [2024-07-22 10:54:44.602460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:18368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:39.069 [2024-07-22 10:54:44.602476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:39.069 [2024-07-22 10:54:44.612714] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:39.069 [2024-07-22 10:54:44.614140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:17333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:39.069 [2024-07-22 
10:54:44.614156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:39.069 [2024-07-22 10:54:44.624403] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:39.069 [2024-07-22 10:54:44.625829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:39.069 [2024-07-22 10:54:44.625845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:39.069 [2024-07-22 10:54:44.636087] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:39.069 [2024-07-22 10:54:44.637486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:18460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:39.069 [2024-07-22 10:54:44.637502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:39.069 [2024-07-22 10:54:44.647798] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:39.070 [2024-07-22 10:54:44.649227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:7164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:39.070 [2024-07-22 10:54:44.649243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:39.070 [2024-07-22 10:54:44.659472] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:39.070 [2024-07-22 10:54:44.660908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:9739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:39.070 [2024-07-22 10:54:44.660924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:39.070 [2024-07-22 10:54:44.671159] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:39.070 [2024-07-22 10:54:44.672616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:24301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:39.070 [2024-07-22 10:54:44.672632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:39.070 [2024-07-22 10:54:44.682918] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:39.070 [2024-07-22 10:54:44.684348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:39.070 [2024-07-22 10:54:44.684364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:39.070 [2024-07-22 10:54:44.694625] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:39.070 [2024-07-22 10:54:44.696042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11486 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:38:39.070 [2024-07-22 10:54:44.696061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:39.070 [2024-07-22 10:54:44.706418] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:39.070 [2024-07-22 10:54:44.707863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:16567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:39.070 [2024-07-22 10:54:44.707879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:39.070 [2024-07-22 10:54:44.718099] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:39.070 [2024-07-22 10:54:44.719524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:39.070 [2024-07-22 10:54:44.719541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:39.070 [2024-07-22 10:54:44.729779] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:39.070 [2024-07-22 10:54:44.731208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:14226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:39.070 [2024-07-22 10:54:44.731224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:39.070 [2024-07-22 10:54:44.741498] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:39.070 [2024-07-22 10:54:44.742889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:39.070 [2024-07-22 10:54:44.742905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:39.070 [2024-07-22 10:54:44.753207] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:39.070 [2024-07-22 10:54:44.754655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:39.070 [2024-07-22 10:54:44.754671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:39.070 [2024-07-22 10:54:44.765104] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:39.070 [2024-07-22 10:54:44.766523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:39.070 [2024-07-22 10:54:44.766539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:39.330 [2024-07-22 10:54:44.776784] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:39.330 [2024-07-22 10:54:44.778217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21246 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:39.330 [2024-07-22 10:54:44.778233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:39.330 [2024-07-22 10:54:44.788471] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:39.330 [2024-07-22 10:54:44.789901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:39.330 [2024-07-22 10:54:44.789917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:39.330 [2024-07-22 10:54:44.800139] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:39.330 [2024-07-22 10:54:44.801573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:14879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:39.330 [2024-07-22 10:54:44.801590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:39.330 [2024-07-22 10:54:44.811838] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:39.330 [2024-07-22 10:54:44.813258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:39.330 [2024-07-22 10:54:44.813274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:39.330 [2024-07-22 10:54:44.823523] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:39.330 [2024-07-22 10:54:44.824947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:39.330 [2024-07-22 10:54:44.824963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:39.330 [2024-07-22 10:54:44.835203] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:39.330 [2024-07-22 10:54:44.836651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:3190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:39.330 [2024-07-22 10:54:44.836668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:39.330 [2024-07-22 10:54:44.846903] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:39.330 [2024-07-22 10:54:44.848331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:39.330 [2024-07-22 10:54:44.848348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:39.330 [2024-07-22 10:54:44.858589] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:39.330 [2024-07-22 10:54:44.860016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:105 nsid:1 lba:18114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:39.330 [2024-07-22 10:54:44.860032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:39.330 [2024-07-22 10:54:44.870283] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:39.330 [2024-07-22 10:54:44.871673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:39.330 [2024-07-22 10:54:44.871689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:39.330 [2024-07-22 10:54:44.881981] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:39.330 [2024-07-22 10:54:44.883404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:39.330 [2024-07-22 10:54:44.883420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:39.330 [2024-07-22 10:54:44.893659] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:39.330 [2024-07-22 10:54:44.895079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:39.330 [2024-07-22 10:54:44.895095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:39.330 [2024-07-22 10:54:44.905346] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:39.330 [2024-07-22 10:54:44.906776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:12483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:39.330 [2024-07-22 10:54:44.906792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:39.330 [2024-07-22 10:54:44.917017] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:39.330 [2024-07-22 10:54:44.918422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:39.330 [2024-07-22 10:54:44.918439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:39.330 [2024-07-22 10:54:44.928728] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:39.330 [2024-07-22 10:54:44.930152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:12526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:39.330 [2024-07-22 10:54:44.930169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:39.330 [2024-07-22 10:54:44.940417] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:39.330 [2024-07-22 10:54:44.941838] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:39.330 [2024-07-22 10:54:44.941855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:39.330 [2024-07-22 10:54:44.952134] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:39.330 [2024-07-22 10:54:44.953568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:39.330 [2024-07-22 10:54:44.953587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:39.330 [2024-07-22 10:54:44.963813] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:39.330 [2024-07-22 10:54:44.965244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:39.330 [2024-07-22 10:54:44.965260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:39.330 [2024-07-22 10:54:44.975497] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:39.330 [2024-07-22 10:54:44.976926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:39.330 [2024-07-22 10:54:44.976942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:39.330 [2024-07-22 10:54:44.987173] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:39.330 [2024-07-22 10:54:44.988572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:39.330 [2024-07-22 10:54:44.988588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:39.330 [2024-07-22 10:54:44.998878] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:39.330 [2024-07-22 10:54:45.000299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:12028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:39.330 [2024-07-22 10:54:45.000319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:39.330 [2024-07-22 10:54:45.010555] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:39.330 [2024-07-22 10:54:45.011981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:39.330 [2024-07-22 10:54:45.011998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:39.330 [2024-07-22 10:54:45.022247] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:39.330 
[2024-07-22 10:54:45.023676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:18438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:39.330 [2024-07-22 10:54:45.023692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:39.590 [2024-07-22 10:54:45.033928] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:39.590 [2024-07-22 10:54:45.035358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:39.590 [2024-07-22 10:54:45.035373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:39.590 [2024-07-22 10:54:45.045627] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:39.590 [2024-07-22 10:54:45.047055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:25129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:39.590 [2024-07-22 10:54:45.047071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:39.590 [2024-07-22 10:54:45.057304] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:39.590 [2024-07-22 10:54:45.058712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:39.590 [2024-07-22 10:54:45.058729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:39.590 [2024-07-22 10:54:45.069002] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:39.590 [2024-07-22 10:54:45.070438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:39.590 [2024-07-22 10:54:45.070455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:39.590 [2024-07-22 10:54:45.080697] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:39.590 [2024-07-22 10:54:45.082126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:39.590 [2024-07-22 10:54:45.082143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:39.590 [2024-07-22 10:54:45.092408] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658 00:38:39.590 [2024-07-22 10:54:45.093787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:6658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:39.590 [2024-07-22 10:54:45.093803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:39.590 [2024-07-22 10:54:45.104085] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with 
pdu=0x2000190e5658
00:38:39.590 [2024-07-22 10:54:45.105512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:17629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:39.590 [2024-07-22 10:54:45.105528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:38:39.590 [2024-07-22 10:54:45.115776] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658
00:38:39.590 [2024-07-22 10:54:45.117207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:39.590 [2024-07-22 10:54:45.117223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:38:39.590 [2024-07-22 10:54:45.127509] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658
00:38:39.590 [2024-07-22 10:54:45.128939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:39.590 [2024-07-22 10:54:45.128955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:38:39.590 [2024-07-22 10:54:45.139206] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658
00:38:39.590 [2024-07-22 10:54:45.140639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:6906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:39.590 [2024-07-22 10:54:45.140655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:38:39.590 [2024-07-22 10:54:45.150877] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15959f0) with pdu=0x2000190e5658
00:38:39.590 [2024-07-22 10:54:45.152307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:39.590 [2024-07-22 10:54:45.152324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:38:39.590
00:38:39.590 Latency(us)
00:38:39.590 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:39.590 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:38:39.590 nvme0n1 : 2.00 21814.54 85.21 0.00 0.00 5859.82 2280.11 14090.24
00:38:39.590 ===================================================================================================================
00:38:39.590 Total : 21814.54 85.21 0.00 0.00 5859.82 2280.11 14090.24
00:38:39.590 0
00:38:39.590 10:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:38:39.590 10:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:38:39.590 10:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:38:39.590 | .driver_specific
00:38:39.590 | .nvme_error
00:38:39.590 | .status_code
00:38:39.590 | .command_transient_transport_error'
00:38:39.590 10:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:38:39.850 10:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 171 > 0 ))
00:38:39.850 10:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2250839
00:38:39.850 10:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2250839 ']'
00:38:39.850 10:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2250839
00:38:39.850 10:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:38:39.850 10:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:38:39.850 10:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2250839
00:38:39.850 10:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:38:39.850 10:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:38:39.850 10:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2250839'
00:38:39.850 killing process with pid 2250839
00:38:39.850 10:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2250839
00:38:39.850 Received shutdown signal, test time was about 2.000000 seconds
00:38:39.850
00:38:39.850 Latency(us)
00:38:39.850 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:39.850 ===================================================================================================================
00:38:39.850 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:38:39.850 10:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2250839
00:38:39.850 10:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:38:39.850 10:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:38:39.850 10:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:38:39.850 10:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:38:39.850 10:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:38:39.850 10:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2251545
00:38:39.850 10:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2251545 /var/tmp/bperf.sock
00:38:39.850 10:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2251545 ']'
00:38:39.850 10:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:38:39.850 10:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:38:39.850 10:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:38:39.850 10:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
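The check traced at host/digest.sh@71 above is the pass/fail gate for the 4 KiB run: get_transient_errcount reads the bdevperf I/O statistics over /var/tmp/bperf.sock with bdev_get_iostat and pulls the command_transient_transport_error counter out of the returned JSON with jq, and the run only counts as a pass if that number is greater than zero (171 here). A minimal sketch of that readback, reconstructed from the xtrace above rather than copied from digest.sh itself (the argument handling inside the helper is an assumption):

# Reconstructed from the xtrace: count WRITEs that completed with
# COMMAND TRANSIENT TRANSPORT ERROR on the bdev driven by bdevperf.
get_transient_errcount() {
    local bdev=$1
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error'
}

# The digest-error test passes only if the injected CRC-32C corruption was
# actually detected, i.e. at least one command finished with a transient error.
(( $(get_transient_errcount nvme0n1) > 0 ))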
00:38:39.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:38:39.850 10:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:38:39.850 10:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:38:40.110 [2024-07-22 10:54:45.555232] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization...
00:38:40.110 [2024-07-22 10:54:45.555318] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2251545 ]
00:38:40.110 I/O size of 131072 is greater than zero copy threshold (65536).
00:38:40.110 Zero copy mechanism will not be used.
00:38:40.110 EAL: No free 2048 kB hugepages reported on node 1
00:38:40.110 [2024-07-22 10:54:45.635824] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:38:40.110 [2024-07-22 10:54:45.663430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:38:40.678 10:54:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:38:40.678 10:54:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:38:40.678 10:54:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:38:40.678 10:54:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:38:40.937 10:54:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:38:40.937 10:54:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:38:40.937 10:54:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:38:40.937 10:54:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:38:40.937 10:54:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:38:40.937 10:54:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:38:41.197 nvme0n1
00:38:41.197 10:54:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:38:41.197 10:54:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:38:41.197 10:54:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:38:41.197 10:54:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:38:41.197 10:54:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:38:41.197 10:54:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:38:41.197 I/O
size of 131072 is greater than zero copy threshold (65536). 00:38:41.197 Zero copy mechanism will not be used. 00:38:41.197 Running I/O for 2 seconds... 00:38:41.197 [2024-07-22 10:54:46.850962] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.197 [2024-07-22 10:54:46.851356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.197 [2024-07-22 10:54:46.851384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:41.197 [2024-07-22 10:54:46.860133] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.197 [2024-07-22 10:54:46.860513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.197 [2024-07-22 10:54:46.860537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:41.197 [2024-07-22 10:54:46.867125] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.197 [2024-07-22 10:54:46.867491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.197 [2024-07-22 10:54:46.867511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:41.197 [2024-07-22 10:54:46.875592] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.197 [2024-07-22 10:54:46.875942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.197 [2024-07-22 10:54:46.875962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:41.197 [2024-07-22 10:54:46.884430] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.197 [2024-07-22 10:54:46.884829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.197 [2024-07-22 10:54:46.884852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:41.197 [2024-07-22 10:54:46.894090] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.197 [2024-07-22 10:54:46.894420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.197 [2024-07-22 10:54:46.894438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:41.465 [2024-07-22 10:54:46.901417] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.465 [2024-07-22 10:54:46.901824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:38:41.465 [2024-07-22 10:54:46.901843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:41.465 [2024-07-22 10:54:46.907239] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.465 [2024-07-22 10:54:46.907564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.465 [2024-07-22 10:54:46.907582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:41.465 [2024-07-22 10:54:46.913999] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.465 [2024-07-22 10:54:46.914305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.465 [2024-07-22 10:54:46.914323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:41.465 [2024-07-22 10:54:46.920661] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.465 [2024-07-22 10:54:46.920975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.465 [2024-07-22 10:54:46.920993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:41.465 [2024-07-22 10:54:46.928645] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.465 [2024-07-22 10:54:46.928718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.465 [2024-07-22 10:54:46.928736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:41.465 [2024-07-22 10:54:46.936987] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.465 [2024-07-22 10:54:46.937204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.465 [2024-07-22 10:54:46.937222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:41.465 [2024-07-22 10:54:46.942974] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.465 [2024-07-22 10:54:46.943043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.465 [2024-07-22 10:54:46.943061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:41.465 [2024-07-22 10:54:46.950873] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.465 [2024-07-22 10:54:46.951194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.465 [2024-07-22 10:54:46.951212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:41.465 [2024-07-22 10:54:46.958823] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.465 [2024-07-22 10:54:46.959138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.465 [2024-07-22 10:54:46.959156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:41.465 [2024-07-22 10:54:46.968743] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.465 [2024-07-22 10:54:46.969064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.465 [2024-07-22 10:54:46.969082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:41.465 [2024-07-22 10:54:46.979105] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.465 [2024-07-22 10:54:46.979464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.465 [2024-07-22 10:54:46.979483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:41.465 [2024-07-22 10:54:46.988863] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.465 [2024-07-22 10:54:46.989067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.465 [2024-07-22 10:54:46.989082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:41.465 [2024-07-22 10:54:46.999636] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.465 [2024-07-22 10:54:46.999747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.465 [2024-07-22 10:54:46.999762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:41.465 [2024-07-22 10:54:47.009383] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.465 [2024-07-22 10:54:47.009464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.465 [2024-07-22 10:54:47.009483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:41.465 [2024-07-22 10:54:47.020907] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.465 [2024-07-22 10:54:47.021223] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.465 [2024-07-22 10:54:47.021241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:41.465 [2024-07-22 10:54:47.030366] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.465 [2024-07-22 10:54:47.030712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.465 [2024-07-22 10:54:47.030731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:41.465 [2024-07-22 10:54:47.041740] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.465 [2024-07-22 10:54:47.042097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.465 [2024-07-22 10:54:47.042115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:41.465 [2024-07-22 10:54:47.053906] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.465 [2024-07-22 10:54:47.054264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.465 [2024-07-22 10:54:47.054283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:41.465 [2024-07-22 10:54:47.066575] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.465 [2024-07-22 10:54:47.066856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.465 [2024-07-22 10:54:47.066872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:41.466 [2024-07-22 10:54:47.079039] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.466 [2024-07-22 10:54:47.079366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.466 [2024-07-22 10:54:47.079384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:41.466 [2024-07-22 10:54:47.090096] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.466 [2024-07-22 10:54:47.090419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.466 [2024-07-22 10:54:47.090438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:41.466 [2024-07-22 10:54:47.098539] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.466 
[2024-07-22 10:54:47.098610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.466 [2024-07-22 10:54:47.098629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:41.466 [2024-07-22 10:54:47.106175] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.466 [2024-07-22 10:54:47.106256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.466 [2024-07-22 10:54:47.106275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:41.466 [2024-07-22 10:54:47.113896] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.466 [2024-07-22 10:54:47.114228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.466 [2024-07-22 10:54:47.114246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:41.466 [2024-07-22 10:54:47.120923] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.466 [2024-07-22 10:54:47.121234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.466 [2024-07-22 10:54:47.121255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:41.466 [2024-07-22 10:54:47.126716] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.466 [2024-07-22 10:54:47.127055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.466 [2024-07-22 10:54:47.127072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:41.466 [2024-07-22 10:54:47.133842] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.466 [2024-07-22 10:54:47.134150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.466 [2024-07-22 10:54:47.134168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:41.466 [2024-07-22 10:54:47.139179] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.466 [2024-07-22 10:54:47.139403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.466 [2024-07-22 10:54:47.139421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:41.466 [2024-07-22 10:54:47.145787] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.466 [2024-07-22 10:54:47.146099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.466 [2024-07-22 10:54:47.146117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:41.466 [2024-07-22 10:54:47.151157] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.466 [2024-07-22 10:54:47.151504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.466 [2024-07-22 10:54:47.151522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:41.466 [2024-07-22 10:54:47.157838] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.466 [2024-07-22 10:54:47.158133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.466 [2024-07-22 10:54:47.158150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:41.726 [2024-07-22 10:54:47.165239] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.726 [2024-07-22 10:54:47.165459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.726 [2024-07-22 10:54:47.165477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:41.726 [2024-07-22 10:54:47.174296] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.726 [2024-07-22 10:54:47.174553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.726 [2024-07-22 10:54:47.174568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:41.726 [2024-07-22 10:54:47.184742] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.726 [2024-07-22 10:54:47.185073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.726 [2024-07-22 10:54:47.185090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:41.726 [2024-07-22 10:54:47.195084] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.726 [2024-07-22 10:54:47.195442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.726 [2024-07-22 10:54:47.195460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:41.726 [2024-07-22 10:54:47.204770] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.726 [2024-07-22 10:54:47.205084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.726 [2024-07-22 10:54:47.205101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:41.726 [2024-07-22 10:54:47.215197] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.726 [2024-07-22 10:54:47.215517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.726 [2024-07-22 10:54:47.215534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:41.726 [2024-07-22 10:54:47.225011] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.726 [2024-07-22 10:54:47.225347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.726 [2024-07-22 10:54:47.225365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:41.726 [2024-07-22 10:54:47.235312] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.726 [2024-07-22 10:54:47.235422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.726 [2024-07-22 10:54:47.235437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:41.726 [2024-07-22 10:54:47.245153] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.726 [2024-07-22 10:54:47.245512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.726 [2024-07-22 10:54:47.245530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:41.726 [2024-07-22 10:54:47.255313] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.726 [2024-07-22 10:54:47.255624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.726 [2024-07-22 10:54:47.255641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:41.726 [2024-07-22 10:54:47.265672] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.726 [2024-07-22 10:54:47.266034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.726 [2024-07-22 10:54:47.266054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:38:41.726 [2024-07-22 10:54:47.275164] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.726 [2024-07-22 10:54:47.275274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.726 [2024-07-22 10:54:47.275290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:41.726 [2024-07-22 10:54:47.285201] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.726 [2024-07-22 10:54:47.285556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.726 [2024-07-22 10:54:47.285574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:41.726 [2024-07-22 10:54:47.294387] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.726 [2024-07-22 10:54:47.294747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.726 [2024-07-22 10:54:47.294765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:41.726 [2024-07-22 10:54:47.302305] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.726 [2024-07-22 10:54:47.302618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.726 [2024-07-22 10:54:47.302636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:41.726 [2024-07-22 10:54:47.311074] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.726 [2024-07-22 10:54:47.311433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.726 [2024-07-22 10:54:47.311450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:41.726 [2024-07-22 10:54:47.316985] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.726 [2024-07-22 10:54:47.317325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.726 [2024-07-22 10:54:47.317343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:41.726 [2024-07-22 10:54:47.326731] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.726 [2024-07-22 10:54:47.327080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.726 [2024-07-22 10:54:47.327098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:41.726 [2024-07-22 10:54:47.336349] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.727 [2024-07-22 10:54:47.336587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.727 [2024-07-22 10:54:47.336606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:41.727 [2024-07-22 10:54:47.345199] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.727 [2024-07-22 10:54:47.345558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.727 [2024-07-22 10:54:47.345577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:41.727 [2024-07-22 10:54:47.355308] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.727 [2024-07-22 10:54:47.355663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.727 [2024-07-22 10:54:47.355682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:41.727 [2024-07-22 10:54:47.365116] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.727 [2024-07-22 10:54:47.365218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.727 [2024-07-22 10:54:47.365233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:41.727 [2024-07-22 10:54:47.375859] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.727 [2024-07-22 10:54:47.376205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.727 [2024-07-22 10:54:47.376224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:41.727 [2024-07-22 10:54:47.385904] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.727 [2024-07-22 10:54:47.386223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.727 [2024-07-22 10:54:47.386241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:41.727 [2024-07-22 10:54:47.395748] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.727 [2024-07-22 10:54:47.395866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.727 [2024-07-22 10:54:47.395882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:41.727 [2024-07-22 10:54:47.402930] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.727 [2024-07-22 10:54:47.403146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.727 [2024-07-22 10:54:47.403165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:41.727 [2024-07-22 10:54:47.407750] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.727 [2024-07-22 10:54:47.407965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.727 [2024-07-22 10:54:47.407984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:41.727 [2024-07-22 10:54:47.414457] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.727 [2024-07-22 10:54:47.414765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.727 [2024-07-22 10:54:47.414783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:41.727 [2024-07-22 10:54:47.421971] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.727 [2024-07-22 10:54:47.422288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.727 [2024-07-22 10:54:47.422306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:41.987 [2024-07-22 10:54:47.427410] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.987 [2024-07-22 10:54:47.427762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.987 [2024-07-22 10:54:47.427780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:41.987 [2024-07-22 10:54:47.434807] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.987 [2024-07-22 10:54:47.435116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.987 [2024-07-22 10:54:47.435134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:41.987 [2024-07-22 10:54:47.439983] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.987 [2024-07-22 10:54:47.440319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.987 [2024-07-22 10:54:47.440336] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:41.987 [2024-07-22 10:54:47.445978] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.987 [2024-07-22 10:54:47.446309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.987 [2024-07-22 10:54:47.446327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:41.987 [2024-07-22 10:54:47.452709] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.987 [2024-07-22 10:54:47.453018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.987 [2024-07-22 10:54:47.453036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:41.987 [2024-07-22 10:54:47.458980] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.987 [2024-07-22 10:54:47.459316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.987 [2024-07-22 10:54:47.459335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:41.987 [2024-07-22 10:54:47.466332] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.987 [2024-07-22 10:54:47.466561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.987 [2024-07-22 10:54:47.466580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:41.987 [2024-07-22 10:54:47.473147] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.987 [2024-07-22 10:54:47.473360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.987 [2024-07-22 10:54:47.473381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:41.987 [2024-07-22 10:54:47.480925] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.987 [2024-07-22 10:54:47.481230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.987 [2024-07-22 10:54:47.481248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:41.987 [2024-07-22 10:54:47.487482] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.987 [2024-07-22 10:54:47.487795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.987 
[2024-07-22 10:54:47.487813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:41.987 [2024-07-22 10:54:47.495193] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.987 [2024-07-22 10:54:47.495510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.987 [2024-07-22 10:54:47.495528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:41.987 [2024-07-22 10:54:47.502734] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.987 [2024-07-22 10:54:47.503073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.987 [2024-07-22 10:54:47.503091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:41.987 [2024-07-22 10:54:47.510958] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.987 [2024-07-22 10:54:47.511304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.987 [2024-07-22 10:54:47.511322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:41.987 [2024-07-22 10:54:47.519365] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.987 [2024-07-22 10:54:47.519689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.987 [2024-07-22 10:54:47.519707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:41.987 [2024-07-22 10:54:47.525651] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.987 [2024-07-22 10:54:47.525869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.987 [2024-07-22 10:54:47.525887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:41.987 [2024-07-22 10:54:47.532771] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.987 [2024-07-22 10:54:47.533122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.987 [2024-07-22 10:54:47.533140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:41.987 [2024-07-22 10:54:47.540941] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.987 [2024-07-22 10:54:47.541281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.987 [2024-07-22 10:54:47.541300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:41.987 [2024-07-22 10:54:47.548306] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.987 [2024-07-22 10:54:47.548620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.987 [2024-07-22 10:54:47.548639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:41.987 [2024-07-22 10:54:47.554506] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.987 [2024-07-22 10:54:47.554833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.987 [2024-07-22 10:54:47.554851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:41.987 [2024-07-22 10:54:47.563149] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.987 [2024-07-22 10:54:47.563503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.987 [2024-07-22 10:54:47.563521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:41.987 [2024-07-22 10:54:47.571655] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.987 [2024-07-22 10:54:47.571995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.987 [2024-07-22 10:54:47.572013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:41.987 [2024-07-22 10:54:47.578818] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.987 [2024-07-22 10:54:47.579128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.987 [2024-07-22 10:54:47.579146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:41.987 [2024-07-22 10:54:47.584120] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.987 [2024-07-22 10:54:47.584337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.987 [2024-07-22 10:54:47.584355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:41.987 [2024-07-22 10:54:47.591093] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.987 [2024-07-22 10:54:47.591444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.987 [2024-07-22 10:54:47.591462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:41.987 [2024-07-22 10:54:47.596213] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.987 [2024-07-22 10:54:47.596434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.987 [2024-07-22 10:54:47.596452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:41.987 [2024-07-22 10:54:47.602234] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.987 [2024-07-22 10:54:47.602542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.987 [2024-07-22 10:54:47.602560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:41.987 [2024-07-22 10:54:47.608630] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.987 [2024-07-22 10:54:47.608953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.987 [2024-07-22 10:54:47.608971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:41.987 [2024-07-22 10:54:47.616367] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.987 [2024-07-22 10:54:47.616724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.987 [2024-07-22 10:54:47.616742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:41.987 [2024-07-22 10:54:47.622675] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.987 [2024-07-22 10:54:47.622986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.987 [2024-07-22 10:54:47.623004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:41.987 [2024-07-22 10:54:47.628827] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.987 [2024-07-22 10:54:47.629130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.987 [2024-07-22 10:54:47.629148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:41.987 [2024-07-22 10:54:47.634969] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.987 [2024-07-22 10:54:47.635279] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.987 [2024-07-22 10:54:47.635297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:41.987 [2024-07-22 10:54:47.639979] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.987 [2024-07-22 10:54:47.640322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.987 [2024-07-22 10:54:47.640340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:41.987 [2024-07-22 10:54:47.647681] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.987 [2024-07-22 10:54:47.647894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.987 [2024-07-22 10:54:47.647913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:41.987 [2024-07-22 10:54:47.658902] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.987 [2024-07-22 10:54:47.659241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.987 [2024-07-22 10:54:47.659262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:41.987 [2024-07-22 10:54:47.667730] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.987 [2024-07-22 10:54:47.668084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.988 [2024-07-22 10:54:47.668102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:41.988 [2024-07-22 10:54:47.678466] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:41.988 [2024-07-22 10:54:47.678797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.988 [2024-07-22 10:54:47.678815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:42.248 [2024-07-22 10:54:47.688555] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:42.248 [2024-07-22 10:54:47.688898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:42.248 [2024-07-22 10:54:47.688917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:42.249 [2024-07-22 10:54:47.698114] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:42.249 
[2024-07-22 10:54:47.698485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:42.249 [2024-07-22 10:54:47.698503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:42.249 [2024-07-22 10:54:47.706437] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:42.249 [2024-07-22 10:54:47.706803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:42.249 [2024-07-22 10:54:47.706821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:42.249 [2024-07-22 10:54:47.717565] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:42.249 [2024-07-22 10:54:47.717900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:42.249 [2024-07-22 10:54:47.717918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:42.249 [2024-07-22 10:54:47.729428] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:42.249 [2024-07-22 10:54:47.729756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:42.249 [2024-07-22 10:54:47.729774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:42.249 [2024-07-22 10:54:47.741299] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:42.249 [2024-07-22 10:54:47.741643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:42.249 [2024-07-22 10:54:47.741660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:42.249 [2024-07-22 10:54:47.752854] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:42.249 [2024-07-22 10:54:47.753211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:42.249 [2024-07-22 10:54:47.753229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:42.249 [2024-07-22 10:54:47.764213] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:42.249 [2024-07-22 10:54:47.764550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:42.249 [2024-07-22 10:54:47.764569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:42.249 [2024-07-22 10:54:47.777050] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:42.249 [2024-07-22 10:54:47.777408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:42.249 [2024-07-22 10:54:47.777427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:42.249 [2024-07-22 10:54:47.787256] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:42.249 [2024-07-22 10:54:47.787571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:42.249 [2024-07-22 10:54:47.787589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:42.249 [2024-07-22 10:54:47.795646] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:42.249 [2024-07-22 10:54:47.795957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:42.249 [2024-07-22 10:54:47.795975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:42.249 [2024-07-22 10:54:47.802790] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:42.249 [2024-07-22 10:54:47.803144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:42.249 [2024-07-22 10:54:47.803162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:42.249 [2024-07-22 10:54:47.813539] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:42.249 [2024-07-22 10:54:47.813882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:42.249 [2024-07-22 10:54:47.813900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:42.249 [2024-07-22 10:54:47.822920] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:42.249 [2024-07-22 10:54:47.823008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:42.249 [2024-07-22 10:54:47.823024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:42.249 [2024-07-22 10:54:47.834491] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:42.249 [2024-07-22 10:54:47.834779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:42.249 [2024-07-22 10:54:47.834800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:42.249 [2024-07-22 10:54:47.843799] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:42.249 [2024-07-22 10:54:47.844152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:42.249 [2024-07-22 10:54:47.844170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:42.249 [2024-07-22 10:54:47.852316] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:42.249 [2024-07-22 10:54:47.852682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:42.249 [2024-07-22 10:54:47.852702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:42.249 [2024-07-22 10:54:47.858817] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:42.249 [2024-07-22 10:54:47.859174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:42.249 [2024-07-22 10:54:47.859194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:42.249 [2024-07-22 10:54:47.865786] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:42.249 [2024-07-22 10:54:47.866065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:42.249 [2024-07-22 10:54:47.866081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:42.249 [2024-07-22 10:54:47.877492] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:42.249 [2024-07-22 10:54:47.877806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:42.249 [2024-07-22 10:54:47.877825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:42.249 [2024-07-22 10:54:47.886500] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:42.249 [2024-07-22 10:54:47.886817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:42.249 [2024-07-22 10:54:47.886835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:42.249 [2024-07-22 10:54:47.894734] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:42.249 [2024-07-22 10:54:47.895051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:42.249 [2024-07-22 10:54:47.895069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
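The records above and below repeat one pattern per LBA: tcp.c reports a data digest error when the CRC32C it recomputes for a received data PDU on tqpair 0x15979e0 does not match the digest carried in the PDU, and the corresponding WRITE (len:32) is completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22), a retryable status (dnr:0). As a rough, standalone sketch of that kind of check only, the C fragment below recomputes a CRC32C over a payload and compares it with a received digest; the crc32c() and data_digest_ok() helpers are illustrative stand-ins, not SPDK's actual tcp.c implementation.

/*
 * Minimal sketch (not SPDK code): recompute a CRC32C data digest over a
 * received payload and compare it with the digest carried in the PDU.
 * A mismatch is the condition the "Data digest error" records report;
 * the command is then completed with a transient transport error so the
 * host may retry it.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++) {
            crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)(-(int32_t)(crc & 1u)));
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

/* True when the recomputed digest matches the digest received in the PDU. */
static bool data_digest_ok(const uint8_t *data, size_t len, uint32_t ddgst_from_pdu)
{
    return crc32c(data, len) == ddgst_from_pdu;
}

int main(void)
{
    uint8_t payload[32] = { 0xAB };          /* stand-in for a small WRITE payload */
    uint32_t good = crc32c(payload, sizeof(payload));

    printf("digest match:    %d\n", data_digest_ok(payload, sizeof(payload), good));
    printf("digest mismatch: %d\n", data_digest_ok(payload, sizeof(payload), good ^ 1u));
    return 0;
}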
00:38:42.249 [2024-07-22 10:54:47.901319] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:42.249 [2024-07-22 10:54:47.901657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:42.249 [2024-07-22 10:54:47.901676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:42.249 [2024-07-22 10:54:47.908799] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:42.249 [2024-07-22 10:54:47.909121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:42.249 [2024-07-22 10:54:47.909139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:42.249 [2024-07-22 10:54:47.914480] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:42.249 [2024-07-22 10:54:47.914696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:42.249 [2024-07-22 10:54:47.914715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:42.249 [2024-07-22 10:54:47.919487] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:42.249 [2024-07-22 10:54:47.919820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:42.249 [2024-07-22 10:54:47.919838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:42.249 [2024-07-22 10:54:47.924769] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:42.249 [2024-07-22 10:54:47.924852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:42.249 [2024-07-22 10:54:47.924868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:42.249 [2024-07-22 10:54:47.930070] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:42.249 [2024-07-22 10:54:47.930293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:42.249 [2024-07-22 10:54:47.930311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:42.249 [2024-07-22 10:54:47.936043] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:42.249 [2024-07-22 10:54:47.936117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:42.249 [2024-07-22 10:54:47.936133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:42.249 [2024-07-22 10:54:47.943546] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:42.249 [2024-07-22 10:54:47.943852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:42.250 [2024-07-22 10:54:47.943871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:42.511 [2024-07-22 10:54:47.952069] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:42.511 [2024-07-22 10:54:47.952418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:42.511 [2024-07-22 10:54:47.952436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:42.511 [2024-07-22 10:54:47.957129] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:42.511 [2024-07-22 10:54:47.957472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:42.511 [2024-07-22 10:54:47.957490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:42.511 [2024-07-22 10:54:47.961781] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:42.511 [2024-07-22 10:54:47.962106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:42.511 [2024-07-22 10:54:47.962124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:42.511 [2024-07-22 10:54:47.967986] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:42.511 [2024-07-22 10:54:47.968202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:42.511 [2024-07-22 10:54:47.968220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:42.511 [2024-07-22 10:54:47.973151] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:42.511 [2024-07-22 10:54:47.973366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:42.511 [2024-07-22 10:54:47.973384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:42.511 [2024-07-22 10:54:47.979167] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90 00:38:42.511 [2024-07-22 10:54:47.979473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:42.511 [2024-07-22 10:54:47.979491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:42.511 [2024-07-22 10:54:47.984920] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90
00:38:42.511 [2024-07-22 10:54:47.985252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:42.511 [2024-07-22 10:54:47.985270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same three-line sequence -- a data_crc32_calc_done "Data digest error" on tqpair=(0x15979e0) with pdu=0x2000190fef90, the failed WRITE (qid:1 cid:15 nsid:1, len:32), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion -- repeats here for each failed WRITE reported between 10:54:47.984920 and 10:54:48.844045; only the timestamps, lba, and sqhd values differ from one repetition to the next. The first and last occurrences are shown in full. ...]
00:38:43.301 [2024-07-22 10:54:48.843952] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15979e0) with pdu=0x2000190fef90
00:38:43.301 [2024-07-22 10:54:48.844029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:43.301 [2024-07-22 10:54:48.844045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
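For quick triage of a block like the one above, the repeated digest-error pattern can be tallied straight from the captured console text with standard tools. A minimal sketch, assuming the console output has been saved to a file; the ./build.log name is illustrative and is not produced by the test itself:

    log=./build.log   # illustrative path: point this at the captured console output

    # total data digest errors detected by the TCP transport layer
    grep -c 'data_crc32_calc_done: \*ERROR\*: Data digest error' "$log"

    # total completions reported as TRANSIENT TRANSPORT ERROR (00/22)
    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' "$log"

    # which LBAs were affected, sorted and de-duplicated
    grep -o 'lba:[0-9]*' "$log" | sort -t: -k2 -n | uniq

In this log the two counts track each other, since every digest failure is immediately followed by a transient transport error completion for the same WRITE.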
00:38:43.301
00:38:43.301 Latency(us)
00:38:43.301 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:43.301 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:38:43.301 nvme0n1 : 2.00 3959.82 494.98 0.00 0.00 4033.07 2007.04 13216.43
00:38:43.301 ===================================================================================================================
00:38:43.301 Total : 3959.82 494.98 0.00 0.00 4033.07 2007.04 13216.43
00:38:43.301 0
00:38:43.301 10:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:38:43.301 10:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:38:43.301 10:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:38:43.301 | .driver_specific
00:38:43.301 | .nvme_error
00:38:43.301 | .status_code
00:38:43.301 | .command_transient_transport_error'
00:38:43.301 10:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:38:43.562 10:54:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 256 > 0 ))
00:38:43.562 10:54:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2251545
00:38:43.562 10:54:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2251545 ']'
00:38:43.562 10:54:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2251545
00:38:43.562 10:54:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:38:43.562 10:54:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:38:43.562 10:54:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2251545
00:38:43.562 10:54:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:38:43.563 10:54:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:38:43.563 10:54:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2251545'
00:38:43.563 killing process with pid 2251545
00:38:43.563 10:54:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2251545
00:38:43.563 Received shutdown signal, test time was about 2.000000 seconds
00:38:43.563
00:38:43.563 Latency(us)
00:38:43.563 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:43.563 ===================================================================================================================
00:38:43.563 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:38:43.563 10:54:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2251545
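The get_transient_errcount step traced above asks the bdevperf RPC socket for per-bdev I/O statistics and pulls a single counter out of the JSON. Run by hand it would look roughly like the sketch below; the rpc.py path, socket name, RPC method, and jq filter are taken from the trace, the standalone form is an assumption, and /var/tmp/bperf.sock only exists while the bdevperf app is still running:

    # Query bdevperf's RPC socket for nvme0n1 iostats and extract the counter
    # that host/digest.sh compares against zero.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0]
             | .driver_specific
             | .nvme_error
             | .status_code
             | .command_transient_transport_error'

For this run the counter came back as 256, which is what makes the (( 256 > 0 )) check above succeed.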
00:38:43.563 10:54:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2249146
00:38:43.563 10:54:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2249146 ']'
00:38:43.563 10:54:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2249146
00:38:43.563 10:54:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:38:43.563 10:54:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:38:43.563 10:54:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2249146
00:38:43.563 10:54:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:38:43.563 10:54:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:38:43.563 10:54:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2249146'
00:38:43.563 killing process with pid 2249146
00:38:43.563 10:54:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2249146
00:38:43.563 10:54:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2249146
00:38:43.822
00:38:43.822 real 0m15.909s
00:38:43.822 user 0m31.331s
00:38:43.822 sys 0m3.246s
00:38:43.822 10:54:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable
00:38:43.822 10:54:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:38:43.822 ************************************
00:38:43.822 END TEST nvmf_digest_error
00:38:43.822 ************************************
00:38:43.822 10:54:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0
00:38:43.822 10:54:49 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:38:43.822 10:54:49 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:38:43.822 10:54:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup
00:38:43.822 10:54:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync
00:38:43.822 10:54:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:38:43.822 10:54:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e
00:38:43.822 10:54:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20}
00:38:43.822 10:54:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:38:43.822 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:38:43.822 10:54:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:38:43.822 10:54:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e
00:38:43.822 10:54:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0
00:38:43.822 10:54:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 2249146 ']'
00:38:43.822 10:54:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 2249146
00:38:43.822 10:54:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 2249146 ']'
00:38:43.822 10:54:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 2249146
00:38:43.822 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2249146) - No such process
00:38:43.822 10:54:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 2249146 is not found'
00:38:43.822 Process with pid 2249146 is not found
00:38:43.822 10:54:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']'
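The killprocess calls traced above follow one fixed shape: bail out if no pid was passed, check whether the process still exists, refuse to signal a bare sudo wrapper, then kill and wait. The helper below is a simplified reconstruction inferred from the traced commands; it is not a copy of autotest_common.sh, which handles more cases:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                  # '[' -z "$pid" ']'
        if ! kill -0 "$pid" 2>/dev/null; then      # process already gone
            echo "Process with pid $pid is not found"
            return 0
        fi
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            # never signal a bare sudo wrapper, only the real worker (e.g. reactor_0)
            [ "$process_name" = sudo ] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                        # reap it; pid is a child of this shell
    }

The second invocation above shows the already-dead branch: kill -0 2249146 fails with "No such process" and the helper simply reports that the pid is not found.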
10:54:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:43.822 10:54:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:43.822 10:54:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:43.822 10:54:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:43.822 10:54:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:43.822 10:54:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:43.822 10:54:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:46.362 10:54:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:38:46.362 00:38:46.362 real 0m41.945s 00:38:46.362 user 1m4.792s 00:38:46.362 sys 0m12.355s 00:38:46.362 10:54:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:46.362 10:54:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:38:46.362 ************************************ 00:38:46.362 END TEST nvmf_digest 00:38:46.362 ************************************ 00:38:46.362 10:54:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:38:46.363 10:54:51 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:38:46.363 10:54:51 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:38:46.363 10:54:51 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:38:46.363 10:54:51 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:38:46.363 10:54:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:38:46.363 10:54:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:46.363 10:54:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:46.363 ************************************ 00:38:46.363 START TEST nvmf_bdevperf 00:38:46.363 ************************************ 00:38:46.363 10:54:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:38:46.363 * Looking for test storage... 
00:38:46.363 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:38:46.363 10:54:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:46.363 10:54:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:38:46.363 10:54:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:46.363 10:54:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:46.363 10:54:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:46.363 10:54:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:46.363 10:54:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:46.363 10:54:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:46.363 10:54:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:46.363 10:54:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:46.363 10:54:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:46.363 10:54:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:46.363 10:54:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:46.363 10:54:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:46.363 10:54:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:46.363 10:54:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:46.363 10:54:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:46.363 10:54:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:46.363 10:54:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:46.363 10:54:51 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:46.363 10:54:51 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:46.363 10:54:51 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:46.363 10:54:51 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:46.363 10:54:51 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:46.363 10:54:51 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:46.363 10:54:51 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:38:46.363 10:54:51 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:46.363 10:54:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:38:46.363 10:54:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:46.363 10:54:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:46.363 10:54:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:46.363 10:54:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:46.363 10:54:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:46.363 10:54:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:46.363 10:54:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:46.363 10:54:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:46.363 10:54:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:46.363 10:54:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:46.363 10:54:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:38:46.363 10:54:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:46.363 10:54:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:46.363 10:54:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:46.363 10:54:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:46.363 10:54:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:46.363 10:54:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:46.363 10:54:51 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:46.363 10:54:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:46.363 10:54:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:38:46.363 10:54:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:38:46.363 10:54:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:38:46.363 10:54:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:38:54.494 Found 0000:31:00.0 (0x8086 - 0x159b) 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:38:54.494 Found 0000:31:00.1 (0x8086 - 0x159b) 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:38:54.494 Found net devices under 0000:31:00.0: cvl_0_0 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:38:54.494 Found net devices under 0000:31:00.1: cvl_0_1 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:38:54.494 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:38:54.495 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:38:54.495 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:54.495 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:54.495 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:54.495 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:38:54.495 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:54.495 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:54.495 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:54.495 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:38:54.495 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:54.495 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.695 ms 00:38:54.495 00:38:54.495 --- 10.0.0.2 ping statistics --- 00:38:54.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:54.495 rtt min/avg/max/mdev = 0.695/0.695/0.695/0.000 ms 00:38:54.495 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:54.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:54.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:38:54.495 00:38:54.495 --- 10.0.0.1 ping statistics --- 00:38:54.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:54.495 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:38:54.495 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:54.495 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:38:54.495 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:38:54.495 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:54.495 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:54.495 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:54.495 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:54.495 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:54.495 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:54.495 10:54:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:38:54.495 10:54:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:38:54.495 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:54.495 10:54:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:38:54.495 10:54:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:54.495 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2256900 00:38:54.495 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2256900 00:38:54.495 10:54:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:38:54.495 10:54:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 2256900 ']' 00:38:54.495 10:54:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:54.495 10:54:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:54.495 10:54:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:54.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:54.495 10:54:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:54.495 10:54:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:54.495 [2024-07-22 10:55:00.012528] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:38:54.495 [2024-07-22 10:55:00.012596] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:54.495 EAL: No free 2048 kB hugepages reported on node 1 00:38:54.495 [2024-07-22 10:55:00.113032] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:54.495 [2024-07-22 10:55:00.162841] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
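Before the bdevperf suite proper starts, nvmf_tcp_init (traced in the chunk above) splits the two ice/e810 ports into a target side and an initiator side: the first port is moved into a fresh network namespace and addressed as 10.0.0.2, the second stays in the root namespace as 10.0.0.1, an iptables rule opens the default NVMe/TCP port, and reachability is verified with ping in both directions. The sketch below condenses that setup using exactly the device names and addresses from this log (cvl_0_0/cvl_0_1); a different machine would substitute its own interfaces. All of it runs as root on the CI node.

  # Target port gets its own namespace; the initiator port stays in the root namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # Initiator side: 10.0.0.1; target side (inside the namespace): 10.0.0.2.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Allow NVMe/TCP traffic in on the default port.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # Same sanity pings as in the trace.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1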
00:38:54.495 [2024-07-22 10:55:00.162898] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:54.495 [2024-07-22 10:55:00.162910] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:54.495 [2024-07-22 10:55:00.162921] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:54.495 [2024-07-22 10:55:00.162929] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:54.495 [2024-07-22 10:55:00.163076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:38:54.495 [2024-07-22 10:55:00.163241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:54.495 [2024-07-22 10:55:00.163240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:38:55.436 10:55:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:55.436 10:55:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:38:55.436 10:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:55.436 10:55:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:38:55.436 10:55:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:55.436 10:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:55.436 10:55:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:55.436 10:55:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:55.436 10:55:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:55.436 [2024-07-22 10:55:00.842492] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:55.436 10:55:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:55.436 10:55:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:55.436 10:55:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:55.436 10:55:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:55.436 Malloc0 00:38:55.436 10:55:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:55.436 10:55:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:55.436 10:55:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:55.436 10:55:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:55.436 10:55:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:55.436 10:55:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:55.436 10:55:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:55.436 10:55:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:55.436 10:55:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:55.436 10:55:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:55.436 10:55:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
00:38:55.436 10:55:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:55.436 [2024-07-22 10:55:00.908872] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:55.436 10:55:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:55.436 10:55:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:38:55.436 10:55:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:38:55.436 10:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:38:55.436 10:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:38:55.436 10:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:55.436 10:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:55.436 { 00:38:55.436 "params": { 00:38:55.436 "name": "Nvme$subsystem", 00:38:55.436 "trtype": "$TEST_TRANSPORT", 00:38:55.436 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:55.436 "adrfam": "ipv4", 00:38:55.436 "trsvcid": "$NVMF_PORT", 00:38:55.436 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:55.436 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:55.436 "hdgst": ${hdgst:-false}, 00:38:55.436 "ddgst": ${ddgst:-false} 00:38:55.436 }, 00:38:55.436 "method": "bdev_nvme_attach_controller" 00:38:55.436 } 00:38:55.436 EOF 00:38:55.436 )") 00:38:55.436 10:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:38:55.436 10:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:38:55.436 10:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:38:55.436 10:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:38:55.436 "params": { 00:38:55.436 "name": "Nvme1", 00:38:55.436 "trtype": "tcp", 00:38:55.436 "traddr": "10.0.0.2", 00:38:55.436 "adrfam": "ipv4", 00:38:55.436 "trsvcid": "4420", 00:38:55.436 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:55.436 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:55.436 "hdgst": false, 00:38:55.436 "ddgst": false 00:38:55.436 }, 00:38:55.436 "method": "bdev_nvme_attach_controller" 00:38:55.436 }' 00:38:55.436 [2024-07-22 10:55:00.963156] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:38:55.436 [2024-07-22 10:55:00.963204] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2256966 ] 00:38:55.436 EAL: No free 2048 kB hugepages reported on node 1 00:38:55.436 [2024-07-22 10:55:01.026195] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:55.436 [2024-07-22 10:55:01.057301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:55.696 Running I/O for 1 seconds... 
00:38:57.092 00:38:57.092 Latency(us) 00:38:57.092 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:57.092 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:38:57.092 Verification LBA range: start 0x0 length 0x4000 00:38:57.092 Nvme1n1 : 1.01 9110.28 35.59 0.00 0.00 13993.16 3031.04 16384.00 00:38:57.092 =================================================================================================================== 00:38:57.092 Total : 9110.28 35.59 0.00 0.00 13993.16 3031.04 16384.00 00:38:57.092 10:55:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2257308 00:38:57.092 10:55:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:38:57.092 10:55:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:38:57.092 10:55:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:38:57.092 10:55:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:38:57.092 10:55:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:38:57.092 10:55:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:57.092 10:55:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:57.092 { 00:38:57.092 "params": { 00:38:57.092 "name": "Nvme$subsystem", 00:38:57.092 "trtype": "$TEST_TRANSPORT", 00:38:57.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:57.092 "adrfam": "ipv4", 00:38:57.092 "trsvcid": "$NVMF_PORT", 00:38:57.092 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:57.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:57.092 "hdgst": ${hdgst:-false}, 00:38:57.092 "ddgst": ${ddgst:-false} 00:38:57.092 }, 00:38:57.092 "method": "bdev_nvme_attach_controller" 00:38:57.092 } 00:38:57.092 EOF 00:38:57.092 )") 00:38:57.092 10:55:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:38:57.092 10:55:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:38:57.092 10:55:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:38:57.092 10:55:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:38:57.092 "params": { 00:38:57.092 "name": "Nvme1", 00:38:57.092 "trtype": "tcp", 00:38:57.092 "traddr": "10.0.0.2", 00:38:57.092 "adrfam": "ipv4", 00:38:57.092 "trsvcid": "4420", 00:38:57.092 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:57.092 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:57.092 "hdgst": false, 00:38:57.092 "ddgst": false 00:38:57.092 }, 00:38:57.092 "method": "bdev_nvme_attach_controller" 00:38:57.092 }' 00:38:57.092 [2024-07-22 10:55:02.529788] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:38:57.092 [2024-07-22 10:55:02.529844] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2257308 ] 00:38:57.092 EAL: No free 2048 kB hugepages reported on node 1 00:38:57.092 [2024-07-22 10:55:02.594070] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:57.092 [2024-07-22 10:55:02.623705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:57.092 Running I/O for 15 seconds... 
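Everything in the two bdevperf runs above is wired up over JSON-RPC: tgt_init launches nvmf_tgt inside the target namespace, waits for its RPC socket (/var/tmp/spdk.sock, as announced above), creates the TCP transport, exports a 64 MiB malloc bdev as namespace 1 of nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, and bdevperf then attaches with the generated Nvme1 JSON printed in the trace. The sketch below condenses that bring-up using only commands and arguments that appear in this log; the chunk that follows it is the failover case, where the target (pid 2256900) is killed with SIGKILL a few seconds into the 15-second run and bdevperf's queued writes complete with ABORTED - SQ DELETION.

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Start the target in the target-side namespace; rpc.py talks to /var/tmp/spdk.sock by default.
  ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
  # (the real script waits for the RPC socket to come up before issuing these calls)
  "$spdk/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
  "$spdk/scripts/rpc.py" bdev_malloc_create 64 512 -b Malloc0
  "$spdk/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$spdk/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$spdk/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # bdevperf reads the Nvme1 attach config on a file descriptor (gen_nvmf_target_json comes from
  # test/nvmf/common.sh, sourced at the top of bdevperf.sh) and drives 128-deep 4 KiB verify I/O
  # against Nvme1n1, with the same flags as the second run above.
  "$spdk/build/examples/bdevperf" --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 15 -f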
00:39:00.387 10:55:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2256900 00:39:00.387 10:55:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:39:00.387 [2024-07-22 10:55:05.500063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:112744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.387 [2024-07-22 10:55:05.500107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.387 [2024-07-22 10:55:05.500128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:112752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.387 [2024-07-22 10:55:05.500138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.387 [2024-07-22 10:55:05.500148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:112760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.387 [2024-07-22 10:55:05.500156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.387 [2024-07-22 10:55:05.500171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:112768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.387 [2024-07-22 10:55:05.500180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.387 [2024-07-22 10:55:05.500190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:112776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.387 [2024-07-22 10:55:05.500198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.387 [2024-07-22 10:55:05.500207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:112784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.387 [2024-07-22 10:55:05.500217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.387 [2024-07-22 10:55:05.500227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:112792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.387 [2024-07-22 10:55:05.500235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.387 [2024-07-22 10:55:05.500246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:112800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.387 [2024-07-22 10:55:05.500253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.387 [2024-07-22 10:55:05.500262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:112808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.387 [2024-07-22 10:55:05.500271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.387 [2024-07-22 10:55:05.500281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:112816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.387 [2024-07-22 10:55:05.500288] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.387 [2024-07-22 10:55:05.500297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:112824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.387 [2024-07-22 10:55:05.500304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.387 [2024-07-22 10:55:05.500314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:112832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.387 [2024-07-22 10:55:05.500323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.387 [2024-07-22 10:55:05.500333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:112840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.387 [2024-07-22 10:55:05.500341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.387 [2024-07-22 10:55:05.500352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:112848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.387 [2024-07-22 10:55:05.500361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.387 [2024-07-22 10:55:05.500372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:112856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.387 [2024-07-22 10:55:05.500380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.387 [2024-07-22 10:55:05.500390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:112864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.387 [2024-07-22 10:55:05.500406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.387 [2024-07-22 10:55:05.500417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:112872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.387 [2024-07-22 10:55:05.500427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.387 [2024-07-22 10:55:05.500439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:112880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.387 [2024-07-22 10:55:05.500448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.387 [2024-07-22 10:55:05.500458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:112888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.387 [2024-07-22 10:55:05.500467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.387 [2024-07-22 10:55:05.500477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:112896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.387 [2024-07-22 10:55:05.500486] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.388 [2024-07-22 10:55:05.500495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:112904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.388 [2024-07-22 10:55:05.500502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.388 [2024-07-22 10:55:05.500511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:112912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.388 [2024-07-22 10:55:05.500519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.388 [2024-07-22 10:55:05.500528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:112920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.388 [2024-07-22 10:55:05.500535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.388 [2024-07-22 10:55:05.500544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:112928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.388 [2024-07-22 10:55:05.500551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.388 [2024-07-22 10:55:05.500561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:112936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.388 [2024-07-22 10:55:05.500568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.388 [2024-07-22 10:55:05.500577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:112944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.388 [2024-07-22 10:55:05.500584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.388 [2024-07-22 10:55:05.500593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:112952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.388 [2024-07-22 10:55:05.500601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.388 [2024-07-22 10:55:05.500610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:112960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.388 [2024-07-22 10:55:05.500617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.388 [2024-07-22 10:55:05.500628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:112968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.388 [2024-07-22 10:55:05.500636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.388 [2024-07-22 10:55:05.500645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:112976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.388 [2024-07-22 10:55:05.500653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.388 [2024-07-22 10:55:05.500662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:112984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.388 [2024-07-22 10:55:05.500669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.388 [2024-07-22 10:55:05.500678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:112992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.388 [2024-07-22 10:55:05.500685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.388 [2024-07-22 10:55:05.500694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:113000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.388 [2024-07-22 10:55:05.500702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.388 [2024-07-22 10:55:05.500711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:113008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.388 [2024-07-22 10:55:05.500718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.388 [2024-07-22 10:55:05.500727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:113016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.388 [2024-07-22 10:55:05.500734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.388 [2024-07-22 10:55:05.500744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:113024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.388 [2024-07-22 10:55:05.500751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.388 [2024-07-22 10:55:05.500760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:113032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.388 [2024-07-22 10:55:05.500767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.388 [2024-07-22 10:55:05.500776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:113040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.388 [2024-07-22 10:55:05.500783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.388 [2024-07-22 10:55:05.500793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:113048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.388 [2024-07-22 10:55:05.500800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.388 [2024-07-22 10:55:05.500809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:113056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.388 [2024-07-22 10:55:05.500816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:39:00.388 [2024-07-22 10:55:05.500825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:113064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.388 [2024-07-22 10:55:05.500834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.388 [2024-07-22 10:55:05.500843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:113072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.388 [2024-07-22 10:55:05.500850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.388 [2024-07-22 10:55:05.500860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:113080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.388 [2024-07-22 10:55:05.500867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.388 [2024-07-22 10:55:05.500876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:113088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.388 [2024-07-22 10:55:05.500883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.388 [2024-07-22 10:55:05.500893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:113096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.388 [2024-07-22 10:55:05.500900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.388 [2024-07-22 10:55:05.500909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:113104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.388 [2024-07-22 10:55:05.500916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.388 [2024-07-22 10:55:05.500925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:113112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.388 [2024-07-22 10:55:05.500932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.388 [2024-07-22 10:55:05.500941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:113120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.388 [2024-07-22 10:55:05.500948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.388 [2024-07-22 10:55:05.500958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:112376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.388 [2024-07-22 10:55:05.500965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.388 [2024-07-22 10:55:05.500974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:112384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.388 [2024-07-22 10:55:05.500981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.388 [2024-07-22 10:55:05.500990] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:112392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.388 [2024-07-22 10:55:05.500998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.388 [2024-07-22 10:55:05.501007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:112400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.388 [2024-07-22 10:55:05.501014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.388 [2024-07-22 10:55:05.501023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:112408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.388 [2024-07-22 10:55:05.501030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.388 [2024-07-22 10:55:05.501040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:112416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.388 [2024-07-22 10:55:05.501048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.388 [2024-07-22 10:55:05.501057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:112424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.388 [2024-07-22 10:55:05.501065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.388 [2024-07-22 10:55:05.501074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:113128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.388 [2024-07-22 10:55:05.501081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.388 [2024-07-22 10:55:05.501090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:113136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.388 [2024-07-22 10:55:05.501098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.388 [2024-07-22 10:55:05.501108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:113144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.388 [2024-07-22 10:55:05.501115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.388 [2024-07-22 10:55:05.501124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:113152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.388 [2024-07-22 10:55:05.501132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.388 [2024-07-22 10:55:05.501141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:113160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.388 [2024-07-22 10:55:05.501148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.388 [2024-07-22 10:55:05.501158] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:113168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.388 [2024-07-22 10:55:05.501165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.388 [2024-07-22 10:55:05.501175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:113176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.388 [2024-07-22 10:55:05.501182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.389 [2024-07-22 10:55:05.501191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.389 [2024-07-22 10:55:05.501198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.389 [2024-07-22 10:55:05.501207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:113192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.389 [2024-07-22 10:55:05.501214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.389 [2024-07-22 10:55:05.501223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:113200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.389 [2024-07-22 10:55:05.501230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.389 [2024-07-22 10:55:05.501240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:113208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.389 [2024-07-22 10:55:05.501247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.389 [2024-07-22 10:55:05.501257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:113216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.389 [2024-07-22 10:55:05.501265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.389 [2024-07-22 10:55:05.501274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:113224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.389 [2024-07-22 10:55:05.501281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.389 [2024-07-22 10:55:05.501290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:113232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.389 [2024-07-22 10:55:05.501297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.389 [2024-07-22 10:55:05.501306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:113240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.389 [2024-07-22 10:55:05.501313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.389 [2024-07-22 10:55:05.501322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:8 nsid:1 lba:113248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.389 [2024-07-22 10:55:05.501330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.389 [2024-07-22 10:55:05.501339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:113256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.389 [2024-07-22 10:55:05.501346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.389 [2024-07-22 10:55:05.501355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:113264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.389 [2024-07-22 10:55:05.501362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.389 [2024-07-22 10:55:05.501371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:113272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.389 [2024-07-22 10:55:05.501378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.389 [2024-07-22 10:55:05.501387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:113280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.389 [2024-07-22 10:55:05.501398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.389 [2024-07-22 10:55:05.501408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:113288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.389 [2024-07-22 10:55:05.501415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.389 [2024-07-22 10:55:05.501424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:113296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.389 [2024-07-22 10:55:05.501431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.389 [2024-07-22 10:55:05.501440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:113304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.389 [2024-07-22 10:55:05.501447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.389 [2024-07-22 10:55:05.501456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:113312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.389 [2024-07-22 10:55:05.501465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.389 [2024-07-22 10:55:05.501474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:113320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.389 [2024-07-22 10:55:05.501481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.389 [2024-07-22 10:55:05.501490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:113328 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.389 [2024-07-22 10:55:05.501497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.389 [2024-07-22 10:55:05.501507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:113336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.389 [2024-07-22 10:55:05.501514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.389 [2024-07-22 10:55:05.501523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:113344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.389 [2024-07-22 10:55:05.501530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.389 [2024-07-22 10:55:05.501539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:113352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.389 [2024-07-22 10:55:05.501546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.389 [2024-07-22 10:55:05.501555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:113360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.389 [2024-07-22 10:55:05.501562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.389 [2024-07-22 10:55:05.501572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:113368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.389 [2024-07-22 10:55:05.501579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.389 [2024-07-22 10:55:05.501588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:113376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.389 [2024-07-22 10:55:05.501595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.389 [2024-07-22 10:55:05.501605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:113384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.389 [2024-07-22 10:55:05.501612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.389 [2024-07-22 10:55:05.501621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:112432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.389 [2024-07-22 10:55:05.501629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.389 [2024-07-22 10:55:05.501639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:112440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.389 [2024-07-22 10:55:05.501646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.389 [2024-07-22 10:55:05.501655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:112448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:39:00.389 [2024-07-22 10:55:05.501662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.389 [2024-07-22 10:55:05.501674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:112456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.389 [2024-07-22 10:55:05.501681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.389 [2024-07-22 10:55:05.501691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.389 [2024-07-22 10:55:05.501698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.389 [2024-07-22 10:55:05.501707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:112472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.389 [2024-07-22 10:55:05.501715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.389 [2024-07-22 10:55:05.501724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:112480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.389 [2024-07-22 10:55:05.501731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.389 [2024-07-22 10:55:05.501740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:112488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.389 [2024-07-22 10:55:05.501747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.389 [2024-07-22 10:55:05.501757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:113392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:00.389 [2024-07-22 10:55:05.501764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.389 [2024-07-22 10:55:05.501773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:112496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.389 [2024-07-22 10:55:05.501781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.389 [2024-07-22 10:55:05.501790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:112504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.389 [2024-07-22 10:55:05.501797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.389 [2024-07-22 10:55:05.501806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:112512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.389 [2024-07-22 10:55:05.501813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.389 [2024-07-22 10:55:05.501822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:112520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.389 [2024-07-22 
10:55:05.501830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.389 [2024-07-22 10:55:05.501839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:112528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.389 [2024-07-22 10:55:05.501846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.389 [2024-07-22 10:55:05.501855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:112536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.389 [2024-07-22 10:55:05.501862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.389 [2024-07-22 10:55:05.501871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:112544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.389 [2024-07-22 10:55:05.501880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.389 [2024-07-22 10:55:05.501889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:112552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.390 [2024-07-22 10:55:05.501896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.390 [2024-07-22 10:55:05.501905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:112560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.390 [2024-07-22 10:55:05.501912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.390 [2024-07-22 10:55:05.501921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:112568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.390 [2024-07-22 10:55:05.501929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.390 [2024-07-22 10:55:05.501938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:112576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.390 [2024-07-22 10:55:05.501945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.390 [2024-07-22 10:55:05.501954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:112584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.390 [2024-07-22 10:55:05.501961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.390 [2024-07-22 10:55:05.501971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:112592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.390 [2024-07-22 10:55:05.501982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.390 [2024-07-22 10:55:05.501992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:112600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.390 [2024-07-22 10:55:05.501999] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.390 [2024-07-22 10:55:05.502008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:112608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.390 [2024-07-22 10:55:05.502015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.390 [2024-07-22 10:55:05.502024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:112616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.390 [2024-07-22 10:55:05.502031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.390 [2024-07-22 10:55:05.502041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:112624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.390 [2024-07-22 10:55:05.502048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.390 [2024-07-22 10:55:05.502057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:112632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.390 [2024-07-22 10:55:05.502064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.390 [2024-07-22 10:55:05.502074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:112640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.390 [2024-07-22 10:55:05.502081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.390 [2024-07-22 10:55:05.502091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:112648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.390 [2024-07-22 10:55:05.502098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.390 [2024-07-22 10:55:05.502108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:112656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.390 [2024-07-22 10:55:05.502115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.390 [2024-07-22 10:55:05.502124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:112664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.390 [2024-07-22 10:55:05.502131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.390 [2024-07-22 10:55:05.502140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:112672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.390 [2024-07-22 10:55:05.502148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.390 [2024-07-22 10:55:05.502157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:112680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.390 [2024-07-22 10:55:05.502164] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.390 [2024-07-22 10:55:05.502174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:112688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.390 [2024-07-22 10:55:05.502181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.390 [2024-07-22 10:55:05.502190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:112696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.390 [2024-07-22 10:55:05.502198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.390 [2024-07-22 10:55:05.502207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:112704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.390 [2024-07-22 10:55:05.502214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.390 [2024-07-22 10:55:05.502224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:112712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.390 [2024-07-22 10:55:05.502231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.390 [2024-07-22 10:55:05.502240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:112720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.390 [2024-07-22 10:55:05.502247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.390 [2024-07-22 10:55:05.502257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:112728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.390 [2024-07-22 10:55:05.502264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.390 [2024-07-22 10:55:05.502273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10862b0 is same with the state(5) to be set 00:39:00.390 [2024-07-22 10:55:05.502282] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:00.390 [2024-07-22 10:55:05.502288] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:00.390 [2024-07-22 10:55:05.502294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112736 len:8 PRP1 0x0 PRP2 0x0 00:39:00.390 [2024-07-22 10:55:05.502303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:00.390 [2024-07-22 10:55:05.502339] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10862b0 was disconnected and freed. reset controller. 
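The dump above is the qpair teardown path printing every still-queued WRITE/READ and completing it with the generic NVMe status "ABORTED - SQ DELETION" (SCT 00h / SC 08h) before qpair 0x10862b0 is freed and the controller reset starts. A minimal C sketch, assuming the public SPDK header spdk/nvme.h and a hypothetical requeue helper, of how an I/O completion callback can tell this transient abort apart from a real I/O error:

#include <stdio.h>
#include "spdk/nvme.h"

/* Completion callback sketch: commands flushed by SQ deletion during a
 * qpair disconnect/reset are transient and can be retried once the
 * controller reconnects; anything else is treated as a hard error. */
static void
io_complete(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	if (spdk_nvme_cpl_is_success(cpl)) {
		return;
	}

	if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		printf("I/O aborted by SQ deletion; retry after reset\n");
		/* requeue_io(ctx);  -- hypothetical application helper */
		return;
	}

	fprintf(stderr, "I/O failed: sct=0x%x sc=0x%x\n",
		cpl->status.sct, cpl->status.sc);
	(void)ctx;
}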
00:39:00.390 [2024-07-22 10:55:05.505883] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.390 [2024-07-22 10:55:05.505932] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:00.390 [2024-07-22 10:55:05.506771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.390 [2024-07-22 10:55:05.506809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:00.390 [2024-07-22 10:55:05.506820] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:00.390 [2024-07-22 10:55:05.507061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:00.390 [2024-07-22 10:55:05.507282] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.390 [2024-07-22 10:55:05.507291] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.390 [2024-07-22 10:55:05.507300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.390 [2024-07-22 10:55:05.510821] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:00.390 [2024-07-22 10:55:05.519911] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.390 [2024-07-22 10:55:05.520448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.390 [2024-07-22 10:55:05.520485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:00.390 [2024-07-22 10:55:05.520497] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:00.390 [2024-07-22 10:55:05.520738] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:00.390 [2024-07-22 10:55:05.520959] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.390 [2024-07-22 10:55:05.520967] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.390 [2024-07-22 10:55:05.520976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.390 [2024-07-22 10:55:05.524501] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
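This first reset cycle shows the sequence the remaining entries repeat: nvme_ctrlr_disconnect, a TCP connect() to 10.0.0.2:4420 refused with errno 111 (ECONNREFUSED, expected here while the target's listener is down), the pending flush failing with "Bad file descriptor", and spdk_nvme_ctrlr_reconnect_poll_async reporting that controller reinitialization failed, after which bdev_nvme marks the reset attempt as failed before retrying. A minimal sketch of that disconnect/reconnect polling pattern, assuming the public SPDK controller API and leaving the retry/backoff policy (bdev_nvme's job) out:

#include <errno.h>
#include "spdk/nvme.h"

/* Sketch of the disconnect -> async reconnect -> poll cycle traced in the
 * log (nvme_ctrlr_disconnect / spdk_nvme_ctrlr_reconnect_poll_async).
 * Busy-polling here is only for illustration; in bdev_nvme the poll runs
 * from a poller and the retry cadence is a separate policy. */
static int
reset_and_reconnect(struct spdk_nvme_ctrlr *ctrlr)
{
	int rc;

	rc = spdk_nvme_ctrlr_disconnect(ctrlr);
	if (rc != 0) {
		return rc;
	}

	spdk_nvme_ctrlr_reconnect_async(ctrlr);

	do {
		rc = spdk_nvme_ctrlr_reconnect_poll_async(ctrlr);
	} while (rc == -EAGAIN);

	/* rc == 0: controller reinitialized; rc < 0: reconnect failed, which
	 * is the "controller reinitialization failed" case in this log. */
	return rc;
}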
00:39:00.390 [2024-07-22 10:55:05.533822] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.390 [2024-07-22 10:55:05.534402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.390 [2024-07-22 10:55:05.534421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:00.390 [2024-07-22 10:55:05.534429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:00.390 [2024-07-22 10:55:05.534647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:00.390 [2024-07-22 10:55:05.534863] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.390 [2024-07-22 10:55:05.534871] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.390 [2024-07-22 10:55:05.534878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.390 [2024-07-22 10:55:05.538430] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:00.390 [2024-07-22 10:55:05.547575] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.390 [2024-07-22 10:55:05.548207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.390 [2024-07-22 10:55:05.548244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:00.390 [2024-07-22 10:55:05.548254] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:00.390 [2024-07-22 10:55:05.548503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:00.390 [2024-07-22 10:55:05.548725] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.390 [2024-07-22 10:55:05.548733] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.390 [2024-07-22 10:55:05.548740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.390 [2024-07-22 10:55:05.552251] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:00.390 [2024-07-22 10:55:05.561367] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.390 [2024-07-22 10:55:05.562059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.391 [2024-07-22 10:55:05.562096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:00.391 [2024-07-22 10:55:05.562107] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:00.391 [2024-07-22 10:55:05.562345] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:00.391 [2024-07-22 10:55:05.562573] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.391 [2024-07-22 10:55:05.562583] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.391 [2024-07-22 10:55:05.562590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.391 [2024-07-22 10:55:05.566101] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:00.391 [2024-07-22 10:55:05.575194] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.391 [2024-07-22 10:55:05.575845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.391 [2024-07-22 10:55:05.575881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:00.391 [2024-07-22 10:55:05.575892] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:00.391 [2024-07-22 10:55:05.576130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:00.391 [2024-07-22 10:55:05.576351] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.391 [2024-07-22 10:55:05.576359] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.391 [2024-07-22 10:55:05.576367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.391 [2024-07-22 10:55:05.579883] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:00.391 [2024-07-22 10:55:05.588972] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.391 [2024-07-22 10:55:05.589655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.391 [2024-07-22 10:55:05.589691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:00.391 [2024-07-22 10:55:05.589702] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:00.391 [2024-07-22 10:55:05.589944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:00.391 [2024-07-22 10:55:05.590164] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.391 [2024-07-22 10:55:05.590173] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.391 [2024-07-22 10:55:05.590180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.391 [2024-07-22 10:55:05.593695] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:00.391 [2024-07-22 10:55:05.602790] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.391 [2024-07-22 10:55:05.603458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.391 [2024-07-22 10:55:05.603495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:00.391 [2024-07-22 10:55:05.603506] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:00.391 [2024-07-22 10:55:05.603744] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:00.391 [2024-07-22 10:55:05.603964] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.391 [2024-07-22 10:55:05.603973] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.391 [2024-07-22 10:55:05.603980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.391 [2024-07-22 10:55:05.607495] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:00.391 [2024-07-22 10:55:05.616590] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.391 [2024-07-22 10:55:05.617255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.391 [2024-07-22 10:55:05.617292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:00.391 [2024-07-22 10:55:05.617302] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:00.391 [2024-07-22 10:55:05.617549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:00.391 [2024-07-22 10:55:05.617770] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.391 [2024-07-22 10:55:05.617779] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.391 [2024-07-22 10:55:05.617786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.391 [2024-07-22 10:55:05.621293] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:00.391 [2024-07-22 10:55:05.630384] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.391 [2024-07-22 10:55:05.631010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.391 [2024-07-22 10:55:05.631046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:00.391 [2024-07-22 10:55:05.631057] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:00.391 [2024-07-22 10:55:05.631294] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:00.391 [2024-07-22 10:55:05.631522] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.391 [2024-07-22 10:55:05.631532] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.391 [2024-07-22 10:55:05.631543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.391 [2024-07-22 10:55:05.635067] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:00.391 [2024-07-22 10:55:05.644158] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.391 [2024-07-22 10:55:05.644753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.391 [2024-07-22 10:55:05.644789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:00.391 [2024-07-22 10:55:05.644800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:00.391 [2024-07-22 10:55:05.645037] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:00.391 [2024-07-22 10:55:05.645257] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.391 [2024-07-22 10:55:05.645266] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.391 [2024-07-22 10:55:05.645274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.391 [2024-07-22 10:55:05.648789] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:00.391 [2024-07-22 10:55:05.658104] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.391 [2024-07-22 10:55:05.658781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.391 [2024-07-22 10:55:05.658817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:00.391 [2024-07-22 10:55:05.658828] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:00.391 [2024-07-22 10:55:05.659065] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:00.391 [2024-07-22 10:55:05.659286] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.391 [2024-07-22 10:55:05.659294] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.391 [2024-07-22 10:55:05.659301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.391 [2024-07-22 10:55:05.662817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:00.391 [2024-07-22 10:55:05.671912] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.391 [2024-07-22 10:55:05.672506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.391 [2024-07-22 10:55:05.672542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:00.391 [2024-07-22 10:55:05.672554] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:00.391 [2024-07-22 10:55:05.672795] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:00.391 [2024-07-22 10:55:05.673015] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.391 [2024-07-22 10:55:05.673024] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.392 [2024-07-22 10:55:05.673031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.392 [2024-07-22 10:55:05.676547] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:00.392 [2024-07-22 10:55:05.685847] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.392 [2024-07-22 10:55:05.686359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.392 [2024-07-22 10:55:05.686403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:00.392 [2024-07-22 10:55:05.686414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:00.392 [2024-07-22 10:55:05.686651] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:00.392 [2024-07-22 10:55:05.686871] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.392 [2024-07-22 10:55:05.686879] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.392 [2024-07-22 10:55:05.686887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.392 [2024-07-22 10:55:05.690391] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:00.392 [2024-07-22 10:55:05.699694] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.392 [2024-07-22 10:55:05.700385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.392 [2024-07-22 10:55:05.700427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:00.392 [2024-07-22 10:55:05.700439] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:00.392 [2024-07-22 10:55:05.700677] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:00.392 [2024-07-22 10:55:05.700897] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.392 [2024-07-22 10:55:05.700905] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.392 [2024-07-22 10:55:05.700913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.392 [2024-07-22 10:55:05.704425] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:00.392 [2024-07-22 10:55:05.713516] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.392 [2024-07-22 10:55:05.714187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.392 [2024-07-22 10:55:05.714224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:00.392 [2024-07-22 10:55:05.714235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:00.392 [2024-07-22 10:55:05.714480] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:00.392 [2024-07-22 10:55:05.714702] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.392 [2024-07-22 10:55:05.714710] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.392 [2024-07-22 10:55:05.714717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.392 [2024-07-22 10:55:05.718225] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:00.392 [2024-07-22 10:55:05.727315] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.392 [2024-07-22 10:55:05.727862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.392 [2024-07-22 10:55:05.727898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:00.392 [2024-07-22 10:55:05.727909] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:00.392 [2024-07-22 10:55:05.728150] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:00.392 [2024-07-22 10:55:05.728371] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.392 [2024-07-22 10:55:05.728379] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.392 [2024-07-22 10:55:05.728387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.392 [2024-07-22 10:55:05.731904] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:00.392 [2024-07-22 10:55:05.741211] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.392 [2024-07-22 10:55:05.741889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.392 [2024-07-22 10:55:05.741926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:00.392 [2024-07-22 10:55:05.741936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:00.392 [2024-07-22 10:55:05.742173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:00.392 [2024-07-22 10:55:05.742403] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.392 [2024-07-22 10:55:05.742412] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.392 [2024-07-22 10:55:05.742419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.392 [2024-07-22 10:55:05.745926] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:00.392 [2024-07-22 10:55:05.755066] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.392 [2024-07-22 10:55:05.755973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.392 [2024-07-22 10:55:05.756009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:00.392 [2024-07-22 10:55:05.756019] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:00.392 [2024-07-22 10:55:05.756307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:00.392 [2024-07-22 10:55:05.756535] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.392 [2024-07-22 10:55:05.756545] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.392 [2024-07-22 10:55:05.756552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.392 [2024-07-22 10:55:05.760060] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:00.392 [2024-07-22 10:55:05.768950] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.392 [2024-07-22 10:55:05.769682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.392 [2024-07-22 10:55:05.769719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:00.392 [2024-07-22 10:55:05.769729] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:00.392 [2024-07-22 10:55:05.769967] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:00.392 [2024-07-22 10:55:05.770187] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.392 [2024-07-22 10:55:05.770195] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.392 [2024-07-22 10:55:05.770207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.392 [2024-07-22 10:55:05.773726] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:00.392 [2024-07-22 10:55:05.782817] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.392 [2024-07-22 10:55:05.783459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.392 [2024-07-22 10:55:05.783495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:00.392 [2024-07-22 10:55:05.783508] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:00.392 [2024-07-22 10:55:05.783749] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:00.392 [2024-07-22 10:55:05.783969] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.392 [2024-07-22 10:55:05.783977] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.392 [2024-07-22 10:55:05.783984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.392 [2024-07-22 10:55:05.787499] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:00.392 [2024-07-22 10:55:05.796595] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.392 [2024-07-22 10:55:05.797262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.392 [2024-07-22 10:55:05.797299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:00.392 [2024-07-22 10:55:05.797309] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:00.392 [2024-07-22 10:55:05.797555] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:00.392 [2024-07-22 10:55:05.797777] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.392 [2024-07-22 10:55:05.797785] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.392 [2024-07-22 10:55:05.797793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.392 [2024-07-22 10:55:05.801300] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:00.392 [2024-07-22 10:55:05.810391] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.392 [2024-07-22 10:55:05.811055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.392 [2024-07-22 10:55:05.811091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:00.392 [2024-07-22 10:55:05.811102] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:00.392 [2024-07-22 10:55:05.811339] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:00.392 [2024-07-22 10:55:05.811568] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.392 [2024-07-22 10:55:05.811578] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.392 [2024-07-22 10:55:05.811586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.392 [2024-07-22 10:55:05.815092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:00.392 [2024-07-22 10:55:05.824183] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.392 [2024-07-22 10:55:05.824833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.392 [2024-07-22 10:55:05.824873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:00.392 [2024-07-22 10:55:05.824886] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:00.392 [2024-07-22 10:55:05.825124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:00.393 [2024-07-22 10:55:05.825344] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.393 [2024-07-22 10:55:05.825353] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.393 [2024-07-22 10:55:05.825360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.393 [2024-07-22 10:55:05.828876] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:00.393 [2024-07-22 10:55:05.837991] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.393 [2024-07-22 10:55:05.838696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.393 [2024-07-22 10:55:05.838732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:00.393 [2024-07-22 10:55:05.838743] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:00.393 [2024-07-22 10:55:05.838980] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:00.393 [2024-07-22 10:55:05.839201] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.393 [2024-07-22 10:55:05.839209] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.393 [2024-07-22 10:55:05.839216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.393 [2024-07-22 10:55:05.842732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:00.393 [2024-07-22 10:55:05.851824] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.393 [2024-07-22 10:55:05.852505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.393 [2024-07-22 10:55:05.852542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:00.393 [2024-07-22 10:55:05.852554] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:00.393 [2024-07-22 10:55:05.852795] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:00.393 [2024-07-22 10:55:05.853020] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.393 [2024-07-22 10:55:05.853031] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.393 [2024-07-22 10:55:05.853038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.393 [2024-07-22 10:55:05.856555] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
[ 48 further iterations of the same cycle (resetting controller, connect() refused with errno = 111 on 10.0.0.2 port 4420, failed flush of tqpair=0x108c170, controller reinitialization failed, reset failed), timestamped 10:55:05.865 through 10:55:06.520, are omitted here as repeats of the entries above and below. ]
00:39:00.922 [2024-07-22 10:55:06.529876] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.922 [2024-07-22 10:55:06.530630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.922 [2024-07-22 10:55:06.530667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:00.922 [2024-07-22 10:55:06.530678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:00.922 [2024-07-22 10:55:06.530916] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:00.922 [2024-07-22 10:55:06.531137] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.922 [2024-07-22 10:55:06.531146] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.922 [2024-07-22 10:55:06.531153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.922 [2024-07-22 10:55:06.534760] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:00.922 [2024-07-22 10:55:06.543680] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.922 [2024-07-22 10:55:06.544219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.922 [2024-07-22 10:55:06.544255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:00.922 [2024-07-22 10:55:06.544266] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:00.922 [2024-07-22 10:55:06.544518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:00.922 [2024-07-22 10:55:06.544740] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.922 [2024-07-22 10:55:06.544749] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.922 [2024-07-22 10:55:06.544756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.922 [2024-07-22 10:55:06.548262] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:00.922 [2024-07-22 10:55:06.557580] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.922 [2024-07-22 10:55:06.558134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.922 [2024-07-22 10:55:06.558152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:00.922 [2024-07-22 10:55:06.558160] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:00.922 [2024-07-22 10:55:06.558377] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:00.922 [2024-07-22 10:55:06.558600] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.922 [2024-07-22 10:55:06.558609] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.922 [2024-07-22 10:55:06.558616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.922 [2024-07-22 10:55:06.562259] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:00.922 [2024-07-22 10:55:06.571363] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.922 [2024-07-22 10:55:06.571998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.922 [2024-07-22 10:55:06.572035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:00.922 [2024-07-22 10:55:06.572046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:00.922 [2024-07-22 10:55:06.572284] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:00.922 [2024-07-22 10:55:06.572512] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.922 [2024-07-22 10:55:06.572521] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.922 [2024-07-22 10:55:06.572529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.922 [2024-07-22 10:55:06.576040] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:00.922 [2024-07-22 10:55:06.585166] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.922 [2024-07-22 10:55:06.585851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.922 [2024-07-22 10:55:06.585888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:00.923 [2024-07-22 10:55:06.585899] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:00.923 [2024-07-22 10:55:06.586137] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:00.923 [2024-07-22 10:55:06.586357] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.923 [2024-07-22 10:55:06.586366] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.923 [2024-07-22 10:55:06.586373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.923 [2024-07-22 10:55:06.589889] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:00.923 [2024-07-22 10:55:06.598990] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.923 [2024-07-22 10:55:06.599640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.923 [2024-07-22 10:55:06.599677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:00.923 [2024-07-22 10:55:06.599688] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:00.923 [2024-07-22 10:55:06.599926] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:00.923 [2024-07-22 10:55:06.600146] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.923 [2024-07-22 10:55:06.600154] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.923 [2024-07-22 10:55:06.600162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.923 [2024-07-22 10:55:06.603682] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:00.923 [2024-07-22 10:55:06.612779] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.923 [2024-07-22 10:55:06.613465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.923 [2024-07-22 10:55:06.613502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:00.923 [2024-07-22 10:55:06.613513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:00.923 [2024-07-22 10:55:06.613754] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:00.923 [2024-07-22 10:55:06.613975] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.923 [2024-07-22 10:55:06.613983] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.923 [2024-07-22 10:55:06.613990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.923 [2024-07-22 10:55:06.617508] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:01.184 [2024-07-22 10:55:06.626603] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.184 [2024-07-22 10:55:06.627197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.184 [2024-07-22 10:55:06.627215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.184 [2024-07-22 10:55:06.627223] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.184 [2024-07-22 10:55:06.627447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.184 [2024-07-22 10:55:06.627664] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.184 [2024-07-22 10:55:06.627672] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.184 [2024-07-22 10:55:06.627679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.184 [2024-07-22 10:55:06.631183] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:01.184 [2024-07-22 10:55:06.640495] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.184 [2024-07-22 10:55:06.641156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.184 [2024-07-22 10:55:06.641192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.184 [2024-07-22 10:55:06.641203] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.184 [2024-07-22 10:55:06.641447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.184 [2024-07-22 10:55:06.641668] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.184 [2024-07-22 10:55:06.641677] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.184 [2024-07-22 10:55:06.641684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.184 [2024-07-22 10:55:06.645193] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:01.184 [2024-07-22 10:55:06.654294] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.184 [2024-07-22 10:55:06.654859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.184 [2024-07-22 10:55:06.654877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.184 [2024-07-22 10:55:06.654885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.184 [2024-07-22 10:55:06.655102] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.184 [2024-07-22 10:55:06.655319] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.184 [2024-07-22 10:55:06.655326] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.184 [2024-07-22 10:55:06.655338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.184 [2024-07-22 10:55:06.658850] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:01.184 [2024-07-22 10:55:06.668153] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.184 [2024-07-22 10:55:06.668816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.184 [2024-07-22 10:55:06.668853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.184 [2024-07-22 10:55:06.668864] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.185 [2024-07-22 10:55:06.669101] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.185 [2024-07-22 10:55:06.669322] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.185 [2024-07-22 10:55:06.669331] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.185 [2024-07-22 10:55:06.669338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.185 [2024-07-22 10:55:06.672858] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:01.185 [2024-07-22 10:55:06.681953] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.185 [2024-07-22 10:55:06.682527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.185 [2024-07-22 10:55:06.682564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.185 [2024-07-22 10:55:06.682576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.185 [2024-07-22 10:55:06.682817] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.185 [2024-07-22 10:55:06.683038] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.185 [2024-07-22 10:55:06.683046] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.185 [2024-07-22 10:55:06.683054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.185 [2024-07-22 10:55:06.686571] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:01.185 [2024-07-22 10:55:06.695870] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.185 [2024-07-22 10:55:06.696503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.185 [2024-07-22 10:55:06.696539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.185 [2024-07-22 10:55:06.696552] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.185 [2024-07-22 10:55:06.696793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.185 [2024-07-22 10:55:06.697014] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.185 [2024-07-22 10:55:06.697022] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.185 [2024-07-22 10:55:06.697029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.185 [2024-07-22 10:55:06.700545] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:01.185 [2024-07-22 10:55:06.709638] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.185 [2024-07-22 10:55:06.710282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.185 [2024-07-22 10:55:06.710318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.185 [2024-07-22 10:55:06.710329] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.185 [2024-07-22 10:55:06.710573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.185 [2024-07-22 10:55:06.710794] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.185 [2024-07-22 10:55:06.710803] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.185 [2024-07-22 10:55:06.710810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.185 [2024-07-22 10:55:06.714318] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:01.185 [2024-07-22 10:55:06.723412] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.185 [2024-07-22 10:55:06.723857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.185 [2024-07-22 10:55:06.723877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.185 [2024-07-22 10:55:06.723885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.185 [2024-07-22 10:55:06.724103] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.185 [2024-07-22 10:55:06.724320] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.185 [2024-07-22 10:55:06.724328] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.185 [2024-07-22 10:55:06.724335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.185 [2024-07-22 10:55:06.727846] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:01.185 [2024-07-22 10:55:06.737347] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.185 [2024-07-22 10:55:06.737912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.185 [2024-07-22 10:55:06.737928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.185 [2024-07-22 10:55:06.737935] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.185 [2024-07-22 10:55:06.738151] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.185 [2024-07-22 10:55:06.738367] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.185 [2024-07-22 10:55:06.738375] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.185 [2024-07-22 10:55:06.738382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.185 [2024-07-22 10:55:06.741896] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:01.185 [2024-07-22 10:55:06.751190] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.185 [2024-07-22 10:55:06.751755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.185 [2024-07-22 10:55:06.751770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.185 [2024-07-22 10:55:06.751777] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.185 [2024-07-22 10:55:06.751993] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.185 [2024-07-22 10:55:06.752213] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.185 [2024-07-22 10:55:06.752221] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.185 [2024-07-22 10:55:06.752228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.185 [2024-07-22 10:55:06.755741] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:01.185 [2024-07-22 10:55:06.765041] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.185 [2024-07-22 10:55:06.765553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.185 [2024-07-22 10:55:06.765571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.185 [2024-07-22 10:55:06.765578] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.185 [2024-07-22 10:55:06.765796] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.185 [2024-07-22 10:55:06.766012] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.185 [2024-07-22 10:55:06.766020] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.185 [2024-07-22 10:55:06.766027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.185 [2024-07-22 10:55:06.769535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:01.185 [2024-07-22 10:55:06.778834] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.185 [2024-07-22 10:55:06.779413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.185 [2024-07-22 10:55:06.779428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.185 [2024-07-22 10:55:06.779436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.185 [2024-07-22 10:55:06.779652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.185 [2024-07-22 10:55:06.779869] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.185 [2024-07-22 10:55:06.779877] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.185 [2024-07-22 10:55:06.779884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.185 [2024-07-22 10:55:06.783385] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:01.185 [2024-07-22 10:55:06.792714] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.185 [2024-07-22 10:55:06.793259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.185 [2024-07-22 10:55:06.793274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.185 [2024-07-22 10:55:06.793282] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.185 [2024-07-22 10:55:06.793504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.185 [2024-07-22 10:55:06.793721] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.185 [2024-07-22 10:55:06.793728] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.185 [2024-07-22 10:55:06.793735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.185 [2024-07-22 10:55:06.797242] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:01.185 [2024-07-22 10:55:06.806543] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.185 [2024-07-22 10:55:06.807105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.185 [2024-07-22 10:55:06.807120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.185 [2024-07-22 10:55:06.807127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.185 [2024-07-22 10:55:06.807344] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.185 [2024-07-22 10:55:06.807565] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.185 [2024-07-22 10:55:06.807574] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.185 [2024-07-22 10:55:06.807580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.185 [2024-07-22 10:55:06.811084] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:01.185 [2024-07-22 10:55:06.820375] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.185 [2024-07-22 10:55:06.821040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.185 [2024-07-22 10:55:06.821076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.186 [2024-07-22 10:55:06.821088] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.186 [2024-07-22 10:55:06.821325] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.186 [2024-07-22 10:55:06.821552] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.186 [2024-07-22 10:55:06.821561] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.186 [2024-07-22 10:55:06.821569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.186 [2024-07-22 10:55:06.825078] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:01.186 [2024-07-22 10:55:06.834170] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.186 [2024-07-22 10:55:06.834739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.186 [2024-07-22 10:55:06.834758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.186 [2024-07-22 10:55:06.834766] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.186 [2024-07-22 10:55:06.834984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.186 [2024-07-22 10:55:06.835200] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.186 [2024-07-22 10:55:06.835207] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.186 [2024-07-22 10:55:06.835214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.186 [2024-07-22 10:55:06.838733] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:01.186 [2024-07-22 10:55:06.848038] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.186 [2024-07-22 10:55:06.848586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.186 [2024-07-22 10:55:06.848603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.186 [2024-07-22 10:55:06.848614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.186 [2024-07-22 10:55:06.848831] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.186 [2024-07-22 10:55:06.849049] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.186 [2024-07-22 10:55:06.849056] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.186 [2024-07-22 10:55:06.849063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.186 [2024-07-22 10:55:06.852570] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:01.186 [2024-07-22 10:55:06.861872] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.186 [2024-07-22 10:55:06.862495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.186 [2024-07-22 10:55:06.862531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.186 [2024-07-22 10:55:06.862543] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.186 [2024-07-22 10:55:06.862782] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.186 [2024-07-22 10:55:06.863003] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.186 [2024-07-22 10:55:06.863011] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.186 [2024-07-22 10:55:06.863019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.186 [2024-07-22 10:55:06.866536] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:01.186 [2024-07-22 10:55:06.875629] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.186 [2024-07-22 10:55:06.876227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.186 [2024-07-22 10:55:06.876245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.186 [2024-07-22 10:55:06.876253] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.186 [2024-07-22 10:55:06.876477] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.186 [2024-07-22 10:55:06.876694] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.186 [2024-07-22 10:55:06.876702] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.186 [2024-07-22 10:55:06.876709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.186 [2024-07-22 10:55:06.880210] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:01.447 [2024-07-22 10:55:06.889512] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.447 [2024-07-22 10:55:06.889937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.447 [2024-07-22 10:55:06.889955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.447 [2024-07-22 10:55:06.889963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.447 [2024-07-22 10:55:06.890180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.447 [2024-07-22 10:55:06.890410] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.447 [2024-07-22 10:55:06.890419] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.447 [2024-07-22 10:55:06.890426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.447 [2024-07-22 10:55:06.893928] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:01.447 [2024-07-22 10:55:06.903432] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.447 [2024-07-22 10:55:06.903963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.447 [2024-07-22 10:55:06.903978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.447 [2024-07-22 10:55:06.903986] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.447 [2024-07-22 10:55:06.904203] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.447 [2024-07-22 10:55:06.904424] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.447 [2024-07-22 10:55:06.904432] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.447 [2024-07-22 10:55:06.904439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.447 [2024-07-22 10:55:06.907941] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:01.447 [2024-07-22 10:55:06.917234] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.447 [2024-07-22 10:55:06.917933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.447 [2024-07-22 10:55:06.917969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.447 [2024-07-22 10:55:06.917980] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.447 [2024-07-22 10:55:06.918217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.447 [2024-07-22 10:55:06.918445] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.448 [2024-07-22 10:55:06.918454] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.448 [2024-07-22 10:55:06.918462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.448 [2024-07-22 10:55:06.921968] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:01.448 [2024-07-22 10:55:06.931056] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.448 [2024-07-22 10:55:06.931524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.448 [2024-07-22 10:55:06.931542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.448 [2024-07-22 10:55:06.931550] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.448 [2024-07-22 10:55:06.931768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.448 [2024-07-22 10:55:06.931985] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.448 [2024-07-22 10:55:06.931993] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.448 [2024-07-22 10:55:06.932000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.448 [2024-07-22 10:55:06.935506] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:01.448 [2024-07-22 10:55:06.944818] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.448 [2024-07-22 10:55:06.945369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.448 [2024-07-22 10:55:06.945384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.448 [2024-07-22 10:55:06.945392] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.448 [2024-07-22 10:55:06.945614] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.448 [2024-07-22 10:55:06.945830] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.448 [2024-07-22 10:55:06.945839] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.448 [2024-07-22 10:55:06.945845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.448 [2024-07-22 10:55:06.949346] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:01.448 [2024-07-22 10:55:06.958650] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.448 [2024-07-22 10:55:06.959320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.448 [2024-07-22 10:55:06.959356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.448 [2024-07-22 10:55:06.959369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.448 [2024-07-22 10:55:06.959619] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.448 [2024-07-22 10:55:06.959841] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.448 [2024-07-22 10:55:06.959849] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.448 [2024-07-22 10:55:06.959857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.448 [2024-07-22 10:55:06.963365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:01.448 [2024-07-22 10:55:06.972465] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.448 [2024-07-22 10:55:06.973063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.448 [2024-07-22 10:55:06.973099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.448 [2024-07-22 10:55:06.973110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.448 [2024-07-22 10:55:06.973347] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.448 [2024-07-22 10:55:06.973575] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.448 [2024-07-22 10:55:06.973584] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.448 [2024-07-22 10:55:06.973592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.448 [2024-07-22 10:55:06.977100] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:01.448 [2024-07-22 10:55:06.986407] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.448 [2024-07-22 10:55:06.987090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.448 [2024-07-22 10:55:06.987126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.448 [2024-07-22 10:55:06.987141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.448 [2024-07-22 10:55:06.987379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.448 [2024-07-22 10:55:06.987608] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.448 [2024-07-22 10:55:06.987617] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.448 [2024-07-22 10:55:06.987625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.448 [2024-07-22 10:55:06.991134] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:01.448 [2024-07-22 10:55:07.000230] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.448 [2024-07-22 10:55:07.000844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.448 [2024-07-22 10:55:07.000864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.448 [2024-07-22 10:55:07.000872] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.448 [2024-07-22 10:55:07.001090] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.448 [2024-07-22 10:55:07.001307] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.448 [2024-07-22 10:55:07.001314] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.448 [2024-07-22 10:55:07.001322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.448 [2024-07-22 10:55:07.004828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:01.448 [2024-07-22 10:55:07.014125] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.448 [2024-07-22 10:55:07.014667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.448 [2024-07-22 10:55:07.014683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.448 [2024-07-22 10:55:07.014691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.448 [2024-07-22 10:55:07.014907] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.448 [2024-07-22 10:55:07.015123] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.448 [2024-07-22 10:55:07.015131] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.448 [2024-07-22 10:55:07.015138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.448 [2024-07-22 10:55:07.018643] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:01.448 [2024-07-22 10:55:07.027940] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.448 [2024-07-22 10:55:07.028596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.448 [2024-07-22 10:55:07.028633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.448 [2024-07-22 10:55:07.028644] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.448 [2024-07-22 10:55:07.028881] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.448 [2024-07-22 10:55:07.029101] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.448 [2024-07-22 10:55:07.029115] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.448 [2024-07-22 10:55:07.029122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.448 [2024-07-22 10:55:07.032640] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:01.448 [2024-07-22 10:55:07.041749] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.448 [2024-07-22 10:55:07.042394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.448 [2024-07-22 10:55:07.042437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.448 [2024-07-22 10:55:07.042448] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.448 [2024-07-22 10:55:07.042685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.448 [2024-07-22 10:55:07.042905] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.448 [2024-07-22 10:55:07.042914] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.448 [2024-07-22 10:55:07.042921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.448 [2024-07-22 10:55:07.046434] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:01.449 [2024-07-22 10:55:07.055534] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.449 [2024-07-22 10:55:07.056212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.449 [2024-07-22 10:55:07.056248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.449 [2024-07-22 10:55:07.056259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.449 [2024-07-22 10:55:07.056504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.449 [2024-07-22 10:55:07.056725] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.449 [2024-07-22 10:55:07.056733] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.449 [2024-07-22 10:55:07.056741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.449 [2024-07-22 10:55:07.060247] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:01.449 [2024-07-22 10:55:07.069342] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.449 [2024-07-22 10:55:07.070010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.449 [2024-07-22 10:55:07.070046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.449 [2024-07-22 10:55:07.070057] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.449 [2024-07-22 10:55:07.070295] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.449 [2024-07-22 10:55:07.070523] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.449 [2024-07-22 10:55:07.070532] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.449 [2024-07-22 10:55:07.070540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.449 [2024-07-22 10:55:07.074048] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:01.449 [2024-07-22 10:55:07.083143] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.449 [2024-07-22 10:55:07.083811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.449 [2024-07-22 10:55:07.083848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.449 [2024-07-22 10:55:07.083858] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.449 [2024-07-22 10:55:07.084096] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.449 [2024-07-22 10:55:07.084317] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.449 [2024-07-22 10:55:07.084325] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.449 [2024-07-22 10:55:07.084332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.449 [2024-07-22 10:55:07.087848] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:01.449 [2024-07-22 10:55:07.096945] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.449 [2024-07-22 10:55:07.097521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.449 [2024-07-22 10:55:07.097557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.449 [2024-07-22 10:55:07.097569] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.449 [2024-07-22 10:55:07.097809] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.449 [2024-07-22 10:55:07.098030] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.449 [2024-07-22 10:55:07.098038] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.449 [2024-07-22 10:55:07.098046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.449 [2024-07-22 10:55:07.101560] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:01.449 [2024-07-22 10:55:07.110863] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.449 [2024-07-22 10:55:07.111440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.449 [2024-07-22 10:55:07.111465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.449 [2024-07-22 10:55:07.111473] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.449 [2024-07-22 10:55:07.111696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.449 [2024-07-22 10:55:07.111914] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.449 [2024-07-22 10:55:07.111922] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.449 [2024-07-22 10:55:07.111929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.449 [2024-07-22 10:55:07.115441] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:01.449 [2024-07-22 10:55:07.124739] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.449 [2024-07-22 10:55:07.125410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.449 [2024-07-22 10:55:07.125447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.449 [2024-07-22 10:55:07.125459] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.449 [2024-07-22 10:55:07.125704] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.449 [2024-07-22 10:55:07.125925] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.449 [2024-07-22 10:55:07.125933] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.449 [2024-07-22 10:55:07.125940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.449 [2024-07-22 10:55:07.129454] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:01.449 [2024-07-22 10:55:07.138600] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.449 [2024-07-22 10:55:07.139286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.449 [2024-07-22 10:55:07.139322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.449 [2024-07-22 10:55:07.139333] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.449 [2024-07-22 10:55:07.139579] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.449 [2024-07-22 10:55:07.139800] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.449 [2024-07-22 10:55:07.139809] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.449 [2024-07-22 10:55:07.139816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.449 [2024-07-22 10:55:07.143324] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:01.710 [2024-07-22 10:55:07.152424] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.710 [2024-07-22 10:55:07.153065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.710 [2024-07-22 10:55:07.153102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.710 [2024-07-22 10:55:07.153112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.710 [2024-07-22 10:55:07.153350] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.710 [2024-07-22 10:55:07.153579] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.710 [2024-07-22 10:55:07.153588] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.710 [2024-07-22 10:55:07.153595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.710 [2024-07-22 10:55:07.157108] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:01.710 [2024-07-22 10:55:07.166206] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.710 [2024-07-22 10:55:07.166885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.710 [2024-07-22 10:55:07.166922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.710 [2024-07-22 10:55:07.166932] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.710 [2024-07-22 10:55:07.167170] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.710 [2024-07-22 10:55:07.167391] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.710 [2024-07-22 10:55:07.167408] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.710 [2024-07-22 10:55:07.167420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.710 [2024-07-22 10:55:07.170929] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:01.710 [2024-07-22 10:55:07.180022] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.710 [2024-07-22 10:55:07.180603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.710 [2024-07-22 10:55:07.180622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.710 [2024-07-22 10:55:07.180630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.710 [2024-07-22 10:55:07.180848] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.710 [2024-07-22 10:55:07.181065] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.710 [2024-07-22 10:55:07.181073] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.710 [2024-07-22 10:55:07.181080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.710 [2024-07-22 10:55:07.184586] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:01.710 [2024-07-22 10:55:07.193879] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.710 [2024-07-22 10:55:07.194514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.710 [2024-07-22 10:55:07.194550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.710 [2024-07-22 10:55:07.194563] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.710 [2024-07-22 10:55:07.194801] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.710 [2024-07-22 10:55:07.195022] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.711 [2024-07-22 10:55:07.195030] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.711 [2024-07-22 10:55:07.195037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.711 [2024-07-22 10:55:07.198556] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:01.711 [2024-07-22 10:55:07.207661] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.711 [2024-07-22 10:55:07.208218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.711 [2024-07-22 10:55:07.208236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.711 [2024-07-22 10:55:07.208244] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.711 [2024-07-22 10:55:07.208489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.711 [2024-07-22 10:55:07.208708] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.711 [2024-07-22 10:55:07.208716] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.711 [2024-07-22 10:55:07.208723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.711 [2024-07-22 10:55:07.212228] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:01.711 [2024-07-22 10:55:07.221521] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.711 [2024-07-22 10:55:07.222190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.711 [2024-07-22 10:55:07.222230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.711 [2024-07-22 10:55:07.222241] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.711 [2024-07-22 10:55:07.222487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.711 [2024-07-22 10:55:07.222708] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.711 [2024-07-22 10:55:07.222716] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.711 [2024-07-22 10:55:07.222724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.711 [2024-07-22 10:55:07.226230] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:01.711 [2024-07-22 10:55:07.235323] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.711 [2024-07-22 10:55:07.236031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.711 [2024-07-22 10:55:07.236067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.711 [2024-07-22 10:55:07.236077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.711 [2024-07-22 10:55:07.236315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.711 [2024-07-22 10:55:07.236544] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.711 [2024-07-22 10:55:07.236560] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.711 [2024-07-22 10:55:07.236568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.711 [2024-07-22 10:55:07.240091] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:01.711 [2024-07-22 10:55:07.249184] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.711 [2024-07-22 10:55:07.249827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.711 [2024-07-22 10:55:07.249863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.711 [2024-07-22 10:55:07.249874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.711 [2024-07-22 10:55:07.250111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.711 [2024-07-22 10:55:07.250331] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.711 [2024-07-22 10:55:07.250340] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.711 [2024-07-22 10:55:07.250347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.711 [2024-07-22 10:55:07.253869] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:01.711 [2024-07-22 10:55:07.262968] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.711 [2024-07-22 10:55:07.263659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.711 [2024-07-22 10:55:07.263696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.711 [2024-07-22 10:55:07.263707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.711 [2024-07-22 10:55:07.263944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.711 [2024-07-22 10:55:07.264169] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.711 [2024-07-22 10:55:07.264177] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.711 [2024-07-22 10:55:07.264185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.711 [2024-07-22 10:55:07.267701] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:01.711 [2024-07-22 10:55:07.276790] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.711 [2024-07-22 10:55:07.277497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.711 [2024-07-22 10:55:07.277535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.711 [2024-07-22 10:55:07.277547] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.711 [2024-07-22 10:55:07.277784] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.711 [2024-07-22 10:55:07.278004] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.711 [2024-07-22 10:55:07.278012] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.711 [2024-07-22 10:55:07.278020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.711 [2024-07-22 10:55:07.281535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:01.711 [2024-07-22 10:55:07.290630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.711 [2024-07-22 10:55:07.291310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.711 [2024-07-22 10:55:07.291346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.711 [2024-07-22 10:55:07.291358] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.711 [2024-07-22 10:55:07.291604] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.711 [2024-07-22 10:55:07.291825] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.711 [2024-07-22 10:55:07.291834] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.711 [2024-07-22 10:55:07.291842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.711 [2024-07-22 10:55:07.295348] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:01.711 [2024-07-22 10:55:07.304444] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.711 [2024-07-22 10:55:07.305086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.711 [2024-07-22 10:55:07.305122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.711 [2024-07-22 10:55:07.305133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.711 [2024-07-22 10:55:07.305371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.711 [2024-07-22 10:55:07.305599] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.711 [2024-07-22 10:55:07.305608] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.711 [2024-07-22 10:55:07.305616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.711 [2024-07-22 10:55:07.309130] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:01.711 [2024-07-22 10:55:07.318231] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.711 [2024-07-22 10:55:07.318922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.711 [2024-07-22 10:55:07.318958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.711 [2024-07-22 10:55:07.318969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.711 [2024-07-22 10:55:07.319206] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.711 [2024-07-22 10:55:07.319435] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.711 [2024-07-22 10:55:07.319444] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.711 [2024-07-22 10:55:07.319451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.711 [2024-07-22 10:55:07.322963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:01.711 [2024-07-22 10:55:07.332060] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.711 [2024-07-22 10:55:07.332728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.711 [2024-07-22 10:55:07.332764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.711 [2024-07-22 10:55:07.332774] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.711 [2024-07-22 10:55:07.333012] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.711 [2024-07-22 10:55:07.333233] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.711 [2024-07-22 10:55:07.333241] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.711 [2024-07-22 10:55:07.333248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.711 [2024-07-22 10:55:07.336763] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:01.711 [2024-07-22 10:55:07.345873] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.711 [2024-07-22 10:55:07.346546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.711 [2024-07-22 10:55:07.346582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.711 [2024-07-22 10:55:07.346593] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.711 [2024-07-22 10:55:07.346831] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.712 [2024-07-22 10:55:07.347051] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.712 [2024-07-22 10:55:07.347060] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.712 [2024-07-22 10:55:07.347067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.712 [2024-07-22 10:55:07.350582] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:01.712 [2024-07-22 10:55:07.359679] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.712 [2024-07-22 10:55:07.360366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.712 [2024-07-22 10:55:07.360409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.712 [2024-07-22 10:55:07.360425] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.712 [2024-07-22 10:55:07.360663] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.712 [2024-07-22 10:55:07.360884] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.712 [2024-07-22 10:55:07.360892] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.712 [2024-07-22 10:55:07.360899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.712 [2024-07-22 10:55:07.364408] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:01.712 [2024-07-22 10:55:07.373495] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.712 [2024-07-22 10:55:07.374161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.712 [2024-07-22 10:55:07.374197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.712 [2024-07-22 10:55:07.374208] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.712 [2024-07-22 10:55:07.374455] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.712 [2024-07-22 10:55:07.374677] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.712 [2024-07-22 10:55:07.374685] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.712 [2024-07-22 10:55:07.374693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.712 [2024-07-22 10:55:07.378198] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:01.712 [2024-07-22 10:55:07.387292] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.712 [2024-07-22 10:55:07.387895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.712 [2024-07-22 10:55:07.387913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.712 [2024-07-22 10:55:07.387921] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.712 [2024-07-22 10:55:07.388138] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.712 [2024-07-22 10:55:07.388355] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.712 [2024-07-22 10:55:07.388363] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.712 [2024-07-22 10:55:07.388370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.712 [2024-07-22 10:55:07.391913] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:01.712 [2024-07-22 10:55:07.401206] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.712 [2024-07-22 10:55:07.401760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.712 [2024-07-22 10:55:07.401776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.712 [2024-07-22 10:55:07.401784] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.712 [2024-07-22 10:55:07.402001] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.712 [2024-07-22 10:55:07.402217] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.712 [2024-07-22 10:55:07.402229] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.712 [2024-07-22 10:55:07.402236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.712 [2024-07-22 10:55:07.405748] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:01.972 [2024-07-22 10:55:07.415050] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.972 [2024-07-22 10:55:07.415741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.972 [2024-07-22 10:55:07.415777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.972 [2024-07-22 10:55:07.415788] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.972 [2024-07-22 10:55:07.416025] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.972 [2024-07-22 10:55:07.416246] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.972 [2024-07-22 10:55:07.416254] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.972 [2024-07-22 10:55:07.416262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.972 [2024-07-22 10:55:07.419779] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:01.972 [2024-07-22 10:55:07.428865] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.972 [2024-07-22 10:55:07.429563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.972 [2024-07-22 10:55:07.429599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.972 [2024-07-22 10:55:07.429611] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.972 [2024-07-22 10:55:07.429848] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.972 [2024-07-22 10:55:07.430068] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.972 [2024-07-22 10:55:07.430076] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.972 [2024-07-22 10:55:07.430084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.972 [2024-07-22 10:55:07.433598] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:01.972 [2024-07-22 10:55:07.442695] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.972 [2024-07-22 10:55:07.443390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.972 [2024-07-22 10:55:07.443433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.972 [2024-07-22 10:55:07.443444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.972 [2024-07-22 10:55:07.443681] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.972 [2024-07-22 10:55:07.443902] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.972 [2024-07-22 10:55:07.443910] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.972 [2024-07-22 10:55:07.443917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.972 [2024-07-22 10:55:07.447429] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:01.972 [2024-07-22 10:55:07.456532] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.972 [2024-07-22 10:55:07.457215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.972 [2024-07-22 10:55:07.457251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.972 [2024-07-22 10:55:07.457262] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.972 [2024-07-22 10:55:07.457509] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.972 [2024-07-22 10:55:07.457730] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.972 [2024-07-22 10:55:07.457738] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.972 [2024-07-22 10:55:07.457746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.972 [2024-07-22 10:55:07.461254] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:01.972 [2024-07-22 10:55:07.470341] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.973 [2024-07-22 10:55:07.471022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.973 [2024-07-22 10:55:07.471058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.973 [2024-07-22 10:55:07.471069] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.973 [2024-07-22 10:55:07.471306] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.973 [2024-07-22 10:55:07.471536] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.973 [2024-07-22 10:55:07.471546] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.973 [2024-07-22 10:55:07.471553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.973 [2024-07-22 10:55:07.475060] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:01.973 [2024-07-22 10:55:07.484149] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.973 [2024-07-22 10:55:07.484833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.973 [2024-07-22 10:55:07.484869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.973 [2024-07-22 10:55:07.484880] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.973 [2024-07-22 10:55:07.485117] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.973 [2024-07-22 10:55:07.485338] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.973 [2024-07-22 10:55:07.485346] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.973 [2024-07-22 10:55:07.485354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.973 [2024-07-22 10:55:07.488869] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:01.973 [2024-07-22 10:55:07.497958] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.973 [2024-07-22 10:55:07.498663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.973 [2024-07-22 10:55:07.498699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.973 [2024-07-22 10:55:07.498714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.973 [2024-07-22 10:55:07.498952] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.973 [2024-07-22 10:55:07.499173] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.973 [2024-07-22 10:55:07.499181] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.973 [2024-07-22 10:55:07.499188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.973 [2024-07-22 10:55:07.502704] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:01.973 [2024-07-22 10:55:07.511792] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.973 [2024-07-22 10:55:07.512352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.973 [2024-07-22 10:55:07.512388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.973 [2024-07-22 10:55:07.512408] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.973 [2024-07-22 10:55:07.512645] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.973 [2024-07-22 10:55:07.512865] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.973 [2024-07-22 10:55:07.512874] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.973 [2024-07-22 10:55:07.512881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.973 [2024-07-22 10:55:07.516387] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:01.973 [2024-07-22 10:55:07.525690] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.973 [2024-07-22 10:55:07.526311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.973 [2024-07-22 10:55:07.526347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.973 [2024-07-22 10:55:07.526360] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.973 [2024-07-22 10:55:07.526610] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.973 [2024-07-22 10:55:07.526832] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.973 [2024-07-22 10:55:07.526840] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.973 [2024-07-22 10:55:07.526848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.973 [2024-07-22 10:55:07.530357] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:01.973 [2024-07-22 10:55:07.539473] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.973 [2024-07-22 10:55:07.540066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.973 [2024-07-22 10:55:07.540084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.973 [2024-07-22 10:55:07.540092] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.973 [2024-07-22 10:55:07.540310] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.973 [2024-07-22 10:55:07.540533] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.973 [2024-07-22 10:55:07.540552] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.973 [2024-07-22 10:55:07.540559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.973 [2024-07-22 10:55:07.544070] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:01.973 [2024-07-22 10:55:07.553370] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.973 [2024-07-22 10:55:07.554000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.973 [2024-07-22 10:55:07.554039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.973 [2024-07-22 10:55:07.554055] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.973 [2024-07-22 10:55:07.554296] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.973 [2024-07-22 10:55:07.554525] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.973 [2024-07-22 10:55:07.554534] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.973 [2024-07-22 10:55:07.554541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.973 [2024-07-22 10:55:07.558050] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:01.973 [2024-07-22 10:55:07.567231] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.973 [2024-07-22 10:55:07.567798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.973 [2024-07-22 10:55:07.567817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.973 [2024-07-22 10:55:07.567825] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.973 [2024-07-22 10:55:07.568043] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.973 [2024-07-22 10:55:07.568259] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.973 [2024-07-22 10:55:07.568267] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.973 [2024-07-22 10:55:07.568274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.973 [2024-07-22 10:55:07.571778] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:01.973 [2024-07-22 10:55:07.581063] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.973 [2024-07-22 10:55:07.581734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.973 [2024-07-22 10:55:07.581770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.973 [2024-07-22 10:55:07.581781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.973 [2024-07-22 10:55:07.582018] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.973 [2024-07-22 10:55:07.582239] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.973 [2024-07-22 10:55:07.582247] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.973 [2024-07-22 10:55:07.582254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.973 [2024-07-22 10:55:07.585929] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:01.973 [2024-07-22 10:55:07.594826] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.973 [2024-07-22 10:55:07.595450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.973 [2024-07-22 10:55:07.595474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.973 [2024-07-22 10:55:07.595483] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.973 [2024-07-22 10:55:07.595706] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.973 [2024-07-22 10:55:07.595923] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.973 [2024-07-22 10:55:07.595931] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.973 [2024-07-22 10:55:07.595938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.973 [2024-07-22 10:55:07.599449] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:01.973 [2024-07-22 10:55:07.608735] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.973 [2024-07-22 10:55:07.609422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.973 [2024-07-22 10:55:07.609458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.973 [2024-07-22 10:55:07.609469] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.973 [2024-07-22 10:55:07.609706] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.973 [2024-07-22 10:55:07.609926] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.973 [2024-07-22 10:55:07.609935] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.973 [2024-07-22 10:55:07.609942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.973 [2024-07-22 10:55:07.613459] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:01.974 [2024-07-22 10:55:07.622544] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.974 [2024-07-22 10:55:07.623140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.974 [2024-07-22 10:55:07.623158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.974 [2024-07-22 10:55:07.623165] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.974 [2024-07-22 10:55:07.623382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.974 [2024-07-22 10:55:07.623630] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.974 [2024-07-22 10:55:07.623641] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.974 [2024-07-22 10:55:07.623648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.974 [2024-07-22 10:55:07.627151] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:01.974 [2024-07-22 10:55:07.636439] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.974 [2024-07-22 10:55:07.636982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.974 [2024-07-22 10:55:07.637018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.974 [2024-07-22 10:55:07.637029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.974 [2024-07-22 10:55:07.637271] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.974 [2024-07-22 10:55:07.637504] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.974 [2024-07-22 10:55:07.637514] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.974 [2024-07-22 10:55:07.637522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.974 [2024-07-22 10:55:07.641040] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:01.974 [2024-07-22 10:55:07.650332] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.974 [2024-07-22 10:55:07.650983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.974 [2024-07-22 10:55:07.651020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.974 [2024-07-22 10:55:07.651030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.974 [2024-07-22 10:55:07.651267] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.974 [2024-07-22 10:55:07.651496] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.974 [2024-07-22 10:55:07.651506] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.974 [2024-07-22 10:55:07.651513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.974 [2024-07-22 10:55:07.655026] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:01.974 [2024-07-22 10:55:07.664122] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.974 [2024-07-22 10:55:07.664716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.974 [2024-07-22 10:55:07.664752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:01.974 [2024-07-22 10:55:07.664763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:01.974 [2024-07-22 10:55:07.665001] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:01.974 [2024-07-22 10:55:07.665221] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.974 [2024-07-22 10:55:07.665229] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.974 [2024-07-22 10:55:07.665236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.974 [2024-07-22 10:55:07.668752] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:02.235 [2024-07-22 10:55:07.678054] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.235 [2024-07-22 10:55:07.678738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.235 [2024-07-22 10:55:07.678775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.235 [2024-07-22 10:55:07.678786] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.235 [2024-07-22 10:55:07.679023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.235 [2024-07-22 10:55:07.679243] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.235 [2024-07-22 10:55:07.679252] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.235 [2024-07-22 10:55:07.679263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.235 [2024-07-22 10:55:07.682780] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:02.235 [2024-07-22 10:55:07.691866] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.235 [2024-07-22 10:55:07.692487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.235 [2024-07-22 10:55:07.692523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.235 [2024-07-22 10:55:07.692533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.235 [2024-07-22 10:55:07.692771] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.235 [2024-07-22 10:55:07.692991] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.235 [2024-07-22 10:55:07.692999] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.235 [2024-07-22 10:55:07.693007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.235 [2024-07-22 10:55:07.696523] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:02.235 [2024-07-22 10:55:07.705609] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.235 [2024-07-22 10:55:07.706289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.235 [2024-07-22 10:55:07.706326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.235 [2024-07-22 10:55:07.706336] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.235 [2024-07-22 10:55:07.706582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.235 [2024-07-22 10:55:07.706804] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.235 [2024-07-22 10:55:07.706812] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.235 [2024-07-22 10:55:07.706819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.235 [2024-07-22 10:55:07.710324] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:02.235 [2024-07-22 10:55:07.719432] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.235 [2024-07-22 10:55:07.720110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.235 [2024-07-22 10:55:07.720146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.235 [2024-07-22 10:55:07.720157] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.235 [2024-07-22 10:55:07.720403] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.235 [2024-07-22 10:55:07.720625] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.235 [2024-07-22 10:55:07.720633] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.235 [2024-07-22 10:55:07.720640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.235 [2024-07-22 10:55:07.724149] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:02.235 [2024-07-22 10:55:07.733237] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.235 [2024-07-22 10:55:07.733897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.235 [2024-07-22 10:55:07.733937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.235 [2024-07-22 10:55:07.733948] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.235 [2024-07-22 10:55:07.734185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.235 [2024-07-22 10:55:07.734414] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.235 [2024-07-22 10:55:07.734424] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.235 [2024-07-22 10:55:07.734432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.235 [2024-07-22 10:55:07.737939] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:02.235 [2024-07-22 10:55:07.747037] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.235 [2024-07-22 10:55:07.747751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.235 [2024-07-22 10:55:07.747787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.235 [2024-07-22 10:55:07.747798] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.235 [2024-07-22 10:55:07.748035] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.235 [2024-07-22 10:55:07.748255] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.235 [2024-07-22 10:55:07.748264] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.235 [2024-07-22 10:55:07.748271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.235 [2024-07-22 10:55:07.751787] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:02.235 [2024-07-22 10:55:07.760877] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.235 [2024-07-22 10:55:07.761635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.235 [2024-07-22 10:55:07.761672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.235 [2024-07-22 10:55:07.761683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.235 [2024-07-22 10:55:07.761920] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.236 [2024-07-22 10:55:07.762141] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.236 [2024-07-22 10:55:07.762150] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.236 [2024-07-22 10:55:07.762157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.236 [2024-07-22 10:55:07.765672] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:02.236 [2024-07-22 10:55:07.774761] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.236 [2024-07-22 10:55:07.775352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.236 [2024-07-22 10:55:07.775370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.236 [2024-07-22 10:55:07.775379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.236 [2024-07-22 10:55:07.775603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.236 [2024-07-22 10:55:07.775825] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.236 [2024-07-22 10:55:07.775834] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.236 [2024-07-22 10:55:07.775841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.236 [2024-07-22 10:55:07.779348] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:02.236 [2024-07-22 10:55:07.788661] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.236 [2024-07-22 10:55:07.789363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.236 [2024-07-22 10:55:07.789406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.236 [2024-07-22 10:55:07.789418] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.236 [2024-07-22 10:55:07.789656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.236 [2024-07-22 10:55:07.789877] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.236 [2024-07-22 10:55:07.789885] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.236 [2024-07-22 10:55:07.789892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.236 [2024-07-22 10:55:07.793401] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:02.236 [2024-07-22 10:55:07.802491] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.236 [2024-07-22 10:55:07.803168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.236 [2024-07-22 10:55:07.803204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.236 [2024-07-22 10:55:07.803216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.236 [2024-07-22 10:55:07.803467] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.236 [2024-07-22 10:55:07.803689] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.236 [2024-07-22 10:55:07.803697] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.236 [2024-07-22 10:55:07.803704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.236 [2024-07-22 10:55:07.807213] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:02.236 [2024-07-22 10:55:07.816310] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.236 [2024-07-22 10:55:07.816868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.236 [2024-07-22 10:55:07.816886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.236 [2024-07-22 10:55:07.816894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.236 [2024-07-22 10:55:07.817111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.236 [2024-07-22 10:55:07.817327] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.236 [2024-07-22 10:55:07.817335] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.236 [2024-07-22 10:55:07.817342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.236 [2024-07-22 10:55:07.820857] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:02.236 [2024-07-22 10:55:07.830156] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.236 [2024-07-22 10:55:07.830713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.236 [2024-07-22 10:55:07.830729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.236 [2024-07-22 10:55:07.830736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.236 [2024-07-22 10:55:07.830954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.236 [2024-07-22 10:55:07.831170] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.236 [2024-07-22 10:55:07.831177] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.236 [2024-07-22 10:55:07.831184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.236 [2024-07-22 10:55:07.834720] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:02.236 [2024-07-22 10:55:07.844035] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.236 [2024-07-22 10:55:07.844705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.236 [2024-07-22 10:55:07.844741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.236 [2024-07-22 10:55:07.844752] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.236 [2024-07-22 10:55:07.844989] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.236 [2024-07-22 10:55:07.845210] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.236 [2024-07-22 10:55:07.845220] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.236 [2024-07-22 10:55:07.845227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.236 [2024-07-22 10:55:07.848755] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:02.236 [2024-07-22 10:55:07.857853] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.236 [2024-07-22 10:55:07.858544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.236 [2024-07-22 10:55:07.858580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.236 [2024-07-22 10:55:07.858591] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.236 [2024-07-22 10:55:07.858828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.236 [2024-07-22 10:55:07.859048] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.236 [2024-07-22 10:55:07.859056] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.236 [2024-07-22 10:55:07.859063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.236 [2024-07-22 10:55:07.862575] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:02.236 [2024-07-22 10:55:07.871672] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.236 [2024-07-22 10:55:07.872273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.236 [2024-07-22 10:55:07.872309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.236 [2024-07-22 10:55:07.872324] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.236 [2024-07-22 10:55:07.872571] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.236 [2024-07-22 10:55:07.872792] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.236 [2024-07-22 10:55:07.872800] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.236 [2024-07-22 10:55:07.872807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.236 [2024-07-22 10:55:07.876312] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:02.236 [2024-07-22 10:55:07.885610] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.236 [2024-07-22 10:55:07.886159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.236 [2024-07-22 10:55:07.886177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.236 [2024-07-22 10:55:07.886185] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.236 [2024-07-22 10:55:07.886408] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.236 [2024-07-22 10:55:07.886626] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.236 [2024-07-22 10:55:07.886633] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.236 [2024-07-22 10:55:07.886640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.236 [2024-07-22 10:55:07.890140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:02.236 [2024-07-22 10:55:07.899427] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.236 [2024-07-22 10:55:07.900053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.236 [2024-07-22 10:55:07.900089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.236 [2024-07-22 10:55:07.900099] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.236 [2024-07-22 10:55:07.900336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.236 [2024-07-22 10:55:07.900566] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.236 [2024-07-22 10:55:07.900575] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.236 [2024-07-22 10:55:07.900583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.236 [2024-07-22 10:55:07.904089] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:02.236 [2024-07-22 10:55:07.913177] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.236 [2024-07-22 10:55:07.913837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.236 [2024-07-22 10:55:07.913873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.236 [2024-07-22 10:55:07.913884] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.237 [2024-07-22 10:55:07.914121] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.237 [2024-07-22 10:55:07.914341] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.237 [2024-07-22 10:55:07.914354] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.237 [2024-07-22 10:55:07.914362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.237 [2024-07-22 10:55:07.917879] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:02.237 [2024-07-22 10:55:07.926967] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.237 [2024-07-22 10:55:07.927659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.237 [2024-07-22 10:55:07.927696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.237 [2024-07-22 10:55:07.927707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.237 [2024-07-22 10:55:07.927944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.237 [2024-07-22 10:55:07.928165] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.237 [2024-07-22 10:55:07.928173] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.237 [2024-07-22 10:55:07.928180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.237 [2024-07-22 10:55:07.931699] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:02.497 [2024-07-22 10:55:07.940805] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.497 [2024-07-22 10:55:07.941509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.497 [2024-07-22 10:55:07.941546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.497 [2024-07-22 10:55:07.941556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.497 [2024-07-22 10:55:07.941793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.497 [2024-07-22 10:55:07.942014] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.497 [2024-07-22 10:55:07.942023] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.497 [2024-07-22 10:55:07.942030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.498 [2024-07-22 10:55:07.945546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:02.498 [2024-07-22 10:55:07.954639] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.498 [2024-07-22 10:55:07.955300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.498 [2024-07-22 10:55:07.955336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.498 [2024-07-22 10:55:07.955348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.498 [2024-07-22 10:55:07.955597] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.498 [2024-07-22 10:55:07.955818] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.498 [2024-07-22 10:55:07.955827] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.498 [2024-07-22 10:55:07.955834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.498 [2024-07-22 10:55:07.959340] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:02.498 [2024-07-22 10:55:07.968435] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.498 [2024-07-22 10:55:07.969085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.498 [2024-07-22 10:55:07.969121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.498 [2024-07-22 10:55:07.969131] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.498 [2024-07-22 10:55:07.969368] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.498 [2024-07-22 10:55:07.969599] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.498 [2024-07-22 10:55:07.969607] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.498 [2024-07-22 10:55:07.969615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.498 [2024-07-22 10:55:07.973124] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:02.498 [2024-07-22 10:55:07.982211] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.498 [2024-07-22 10:55:07.982897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.498 [2024-07-22 10:55:07.982934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.498 [2024-07-22 10:55:07.982945] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.498 [2024-07-22 10:55:07.983182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.498 [2024-07-22 10:55:07.983412] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.498 [2024-07-22 10:55:07.983421] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.498 [2024-07-22 10:55:07.983429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.498 [2024-07-22 10:55:07.986937] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:02.498 [2024-07-22 10:55:07.996026] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.498 [2024-07-22 10:55:07.996713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.498 [2024-07-22 10:55:07.996749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.498 [2024-07-22 10:55:07.996760] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.498 [2024-07-22 10:55:07.996998] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.498 [2024-07-22 10:55:07.997218] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.498 [2024-07-22 10:55:07.997226] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.498 [2024-07-22 10:55:07.997233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.498 [2024-07-22 10:55:08.000749] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:02.498 [2024-07-22 10:55:08.009834] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.498 [2024-07-22 10:55:08.010455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.498 [2024-07-22 10:55:08.010492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.498 [2024-07-22 10:55:08.010504] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.498 [2024-07-22 10:55:08.010747] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.498 [2024-07-22 10:55:08.010968] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.498 [2024-07-22 10:55:08.010977] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.498 [2024-07-22 10:55:08.010985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.498 [2024-07-22 10:55:08.014500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:02.498 [2024-07-22 10:55:08.023587] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.498 [2024-07-22 10:55:08.024267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.498 [2024-07-22 10:55:08.024303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.498 [2024-07-22 10:55:08.024314] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.498 [2024-07-22 10:55:08.024560] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.498 [2024-07-22 10:55:08.024781] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.498 [2024-07-22 10:55:08.024790] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.498 [2024-07-22 10:55:08.024797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.498 [2024-07-22 10:55:08.028303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:02.498 [2024-07-22 10:55:08.037393] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.498 [2024-07-22 10:55:08.037974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.498 [2024-07-22 10:55:08.037993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.498 [2024-07-22 10:55:08.038001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.498 [2024-07-22 10:55:08.038218] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.498 [2024-07-22 10:55:08.038450] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.498 [2024-07-22 10:55:08.038458] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.498 [2024-07-22 10:55:08.038465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.498 [2024-07-22 10:55:08.042001] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:02.498 [2024-07-22 10:55:08.051305] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.498 [2024-07-22 10:55:08.051893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.498 [2024-07-22 10:55:08.051910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.498 [2024-07-22 10:55:08.051917] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.498 [2024-07-22 10:55:08.052134] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.498 [2024-07-22 10:55:08.052351] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.498 [2024-07-22 10:55:08.052359] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.498 [2024-07-22 10:55:08.052370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.498 [2024-07-22 10:55:08.055881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:02.498 [2024-07-22 10:55:08.065175] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.498 [2024-07-22 10:55:08.065829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.498 [2024-07-22 10:55:08.065866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.498 [2024-07-22 10:55:08.065877] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.498 [2024-07-22 10:55:08.066114] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.498 [2024-07-22 10:55:08.066336] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.498 [2024-07-22 10:55:08.066344] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.498 [2024-07-22 10:55:08.066352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.498 [2024-07-22 10:55:08.069867] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:02.498 [2024-07-22 10:55:08.078953] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.498 [2024-07-22 10:55:08.079635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.498 [2024-07-22 10:55:08.079672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.498 [2024-07-22 10:55:08.079682] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.498 [2024-07-22 10:55:08.079919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.498 [2024-07-22 10:55:08.080140] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.498 [2024-07-22 10:55:08.080148] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.498 [2024-07-22 10:55:08.080155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.498 [2024-07-22 10:55:08.083672] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:02.498 [2024-07-22 10:55:08.092758] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.498 [2024-07-22 10:55:08.093416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.498 [2024-07-22 10:55:08.093453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.498 [2024-07-22 10:55:08.093465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.498 [2024-07-22 10:55:08.093705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.498 [2024-07-22 10:55:08.093926] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.499 [2024-07-22 10:55:08.093935] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.499 [2024-07-22 10:55:08.093943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.499 [2024-07-22 10:55:08.097459] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:02.499 [2024-07-22 10:55:08.106545] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.499 [2024-07-22 10:55:08.107136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.499 [2024-07-22 10:55:08.107153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.499 [2024-07-22 10:55:08.107161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.499 [2024-07-22 10:55:08.107378] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.499 [2024-07-22 10:55:08.107602] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.499 [2024-07-22 10:55:08.107610] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.499 [2024-07-22 10:55:08.107617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.499 [2024-07-22 10:55:08.111118] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:02.499 [2024-07-22 10:55:08.120405] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.499 [2024-07-22 10:55:08.121065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.499 [2024-07-22 10:55:08.121101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.499 [2024-07-22 10:55:08.121112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.499 [2024-07-22 10:55:08.121349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.499 [2024-07-22 10:55:08.121579] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.499 [2024-07-22 10:55:08.121588] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.499 [2024-07-22 10:55:08.121596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.499 [2024-07-22 10:55:08.125105] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:02.499 [2024-07-22 10:55:08.134190] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.499 [2024-07-22 10:55:08.134749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.499 [2024-07-22 10:55:08.134766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.499 [2024-07-22 10:55:08.134774] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.499 [2024-07-22 10:55:08.134992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.499 [2024-07-22 10:55:08.135208] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.499 [2024-07-22 10:55:08.135216] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.499 [2024-07-22 10:55:08.135223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.499 [2024-07-22 10:55:08.138736] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:02.499 [2024-07-22 10:55:08.148026] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.499 [2024-07-22 10:55:08.148690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.499 [2024-07-22 10:55:08.148726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.499 [2024-07-22 10:55:08.148737] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.499 [2024-07-22 10:55:08.148978] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.499 [2024-07-22 10:55:08.149198] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.499 [2024-07-22 10:55:08.149206] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.499 [2024-07-22 10:55:08.149214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.499 [2024-07-22 10:55:08.152728] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:02.499 [2024-07-22 10:55:08.161824] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.499 [2024-07-22 10:55:08.162282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.499 [2024-07-22 10:55:08.162300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.499 [2024-07-22 10:55:08.162308] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.499 [2024-07-22 10:55:08.162532] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.499 [2024-07-22 10:55:08.162749] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.499 [2024-07-22 10:55:08.162756] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.499 [2024-07-22 10:55:08.162763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.499 [2024-07-22 10:55:08.166263] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:02.499 [2024-07-22 10:55:08.175758] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.499 [2024-07-22 10:55:08.176334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.499 [2024-07-22 10:55:08.176349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.499 [2024-07-22 10:55:08.176357] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.499 [2024-07-22 10:55:08.176578] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.499 [2024-07-22 10:55:08.176795] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.499 [2024-07-22 10:55:08.176803] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.499 [2024-07-22 10:55:08.176810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.499 [2024-07-22 10:55:08.180310] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:02.499 [2024-07-22 10:55:08.189618] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.499 [2024-07-22 10:55:08.190092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.499 [2024-07-22 10:55:08.190107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.499 [2024-07-22 10:55:08.190114] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.499 [2024-07-22 10:55:08.190331] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.499 [2024-07-22 10:55:08.190554] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.499 [2024-07-22 10:55:08.190562] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.499 [2024-07-22 10:55:08.190572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.499 [2024-07-22 10:55:08.194079] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:02.760 [2024-07-22 10:55:08.203386] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.760 [2024-07-22 10:55:08.203919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.760 [2024-07-22 10:55:08.203934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.760 [2024-07-22 10:55:08.203942] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.760 [2024-07-22 10:55:08.204158] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.760 [2024-07-22 10:55:08.204374] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.760 [2024-07-22 10:55:08.204383] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.760 [2024-07-22 10:55:08.204390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.760 [2024-07-22 10:55:08.207900] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:02.760 [2024-07-22 10:55:08.217205] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.760 [2024-07-22 10:55:08.217763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.760 [2024-07-22 10:55:08.217778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.760 [2024-07-22 10:55:08.217785] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.760 [2024-07-22 10:55:08.218002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.760 [2024-07-22 10:55:08.218218] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.760 [2024-07-22 10:55:08.218227] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.760 [2024-07-22 10:55:08.218234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.760 [2024-07-22 10:55:08.221744] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:02.760 [2024-07-22 10:55:08.231063] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.760 [2024-07-22 10:55:08.231739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.760 [2024-07-22 10:55:08.231776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.760 [2024-07-22 10:55:08.231787] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.760 [2024-07-22 10:55:08.232025] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.760 [2024-07-22 10:55:08.232246] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.760 [2024-07-22 10:55:08.232254] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.760 [2024-07-22 10:55:08.232261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.760 [2024-07-22 10:55:08.235783] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:02.760 [2024-07-22 10:55:08.244901] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.760 [2024-07-22 10:55:08.245487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.760 [2024-07-22 10:55:08.245510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.760 [2024-07-22 10:55:08.245520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.760 [2024-07-22 10:55:08.245737] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.760 [2024-07-22 10:55:08.245954] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.760 [2024-07-22 10:55:08.245961] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.760 [2024-07-22 10:55:08.245968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.760 [2024-07-22 10:55:08.249510] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:02.760 [2024-07-22 10:55:08.258831] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.760 [2024-07-22 10:55:08.259387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.760 [2024-07-22 10:55:08.259409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.760 [2024-07-22 10:55:08.259417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.760 [2024-07-22 10:55:08.259634] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.760 [2024-07-22 10:55:08.259851] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.760 [2024-07-22 10:55:08.259860] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.760 [2024-07-22 10:55:08.259866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.760 [2024-07-22 10:55:08.263372] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:02.760 [2024-07-22 10:55:08.272682] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.760 [2024-07-22 10:55:08.273247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.761 [2024-07-22 10:55:08.273263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.761 [2024-07-22 10:55:08.273270] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.761 [2024-07-22 10:55:08.273493] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.761 [2024-07-22 10:55:08.273709] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.761 [2024-07-22 10:55:08.273717] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.761 [2024-07-22 10:55:08.273724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.761 [2024-07-22 10:55:08.277227] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:02.761 [2024-07-22 10:55:08.286563] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.761 [2024-07-22 10:55:08.287149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.761 [2024-07-22 10:55:08.287164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.761 [2024-07-22 10:55:08.287171] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.761 [2024-07-22 10:55:08.287388] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.761 [2024-07-22 10:55:08.287645] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.761 [2024-07-22 10:55:08.287654] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.761 [2024-07-22 10:55:08.287662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.761 [2024-07-22 10:55:08.291168] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:02.761 [2024-07-22 10:55:08.300480] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.761 [2024-07-22 10:55:08.301135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.761 [2024-07-22 10:55:08.301171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.761 [2024-07-22 10:55:08.301184] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.761 [2024-07-22 10:55:08.301433] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.761 [2024-07-22 10:55:08.301654] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.761 [2024-07-22 10:55:08.301662] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.761 [2024-07-22 10:55:08.301669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.761 [2024-07-22 10:55:08.305181] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:02.761 [2024-07-22 10:55:08.314292] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.761 [2024-07-22 10:55:08.314864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.761 [2024-07-22 10:55:08.314883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.761 [2024-07-22 10:55:08.314892] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.761 [2024-07-22 10:55:08.315109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.761 [2024-07-22 10:55:08.315326] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.761 [2024-07-22 10:55:08.315334] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.761 [2024-07-22 10:55:08.315340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.761 [2024-07-22 10:55:08.318858] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:02.761 [2024-07-22 10:55:08.328164] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.761 [2024-07-22 10:55:08.328716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.761 [2024-07-22 10:55:08.328732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.761 [2024-07-22 10:55:08.328740] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.761 [2024-07-22 10:55:08.328957] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.761 [2024-07-22 10:55:08.329173] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.761 [2024-07-22 10:55:08.329181] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.761 [2024-07-22 10:55:08.329188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.761 [2024-07-22 10:55:08.332705] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:02.761 [2024-07-22 10:55:08.342029] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.761 [2024-07-22 10:55:08.342488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.761 [2024-07-22 10:55:08.342504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.761 [2024-07-22 10:55:08.342512] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.761 [2024-07-22 10:55:08.342728] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.761 [2024-07-22 10:55:08.342944] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.761 [2024-07-22 10:55:08.342952] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.761 [2024-07-22 10:55:08.342959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.761 [2024-07-22 10:55:08.346468] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:02.761 [2024-07-22 10:55:08.355809] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.761 [2024-07-22 10:55:08.356352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.761 [2024-07-22 10:55:08.356367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.761 [2024-07-22 10:55:08.356375] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.761 [2024-07-22 10:55:08.356598] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.761 [2024-07-22 10:55:08.356815] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.761 [2024-07-22 10:55:08.356823] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.761 [2024-07-22 10:55:08.356830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.761 [2024-07-22 10:55:08.360336] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:02.761 [2024-07-22 10:55:08.369657] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.761 [2024-07-22 10:55:08.370134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.761 [2024-07-22 10:55:08.370171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.761 [2024-07-22 10:55:08.370182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.761 [2024-07-22 10:55:08.370430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.761 [2024-07-22 10:55:08.370653] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.761 [2024-07-22 10:55:08.370661] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.761 [2024-07-22 10:55:08.370669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.761 [2024-07-22 10:55:08.374180] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:02.761 [2024-07-22 10:55:08.383499] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.761 [2024-07-22 10:55:08.384103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.761 [2024-07-22 10:55:08.384140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.761 [2024-07-22 10:55:08.384154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.761 [2024-07-22 10:55:08.384392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.761 [2024-07-22 10:55:08.384622] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.761 [2024-07-22 10:55:08.384631] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.761 [2024-07-22 10:55:08.384639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.761 [2024-07-22 10:55:08.388151] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:02.761 [2024-07-22 10:55:08.397254] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.761 [2024-07-22 10:55:08.397814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.761 [2024-07-22 10:55:08.397833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.761 [2024-07-22 10:55:08.397841] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.761 [2024-07-22 10:55:08.398059] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.761 [2024-07-22 10:55:08.398275] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.761 [2024-07-22 10:55:08.398283] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.761 [2024-07-22 10:55:08.398290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.761 [2024-07-22 10:55:08.401801] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:02.761 [2024-07-22 10:55:08.411110] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.761 [2024-07-22 10:55:08.411676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.761 [2024-07-22 10:55:08.411692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.761 [2024-07-22 10:55:08.411700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.761 [2024-07-22 10:55:08.411917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.762 [2024-07-22 10:55:08.412134] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.762 [2024-07-22 10:55:08.412141] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.762 [2024-07-22 10:55:08.412148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.762 [2024-07-22 10:55:08.415659] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:02.762 [2024-07-22 10:55:08.424959] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.762 [2024-07-22 10:55:08.425440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.762 [2024-07-22 10:55:08.425456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.762 [2024-07-22 10:55:08.425463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.762 [2024-07-22 10:55:08.425680] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.762 [2024-07-22 10:55:08.425897] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.762 [2024-07-22 10:55:08.425909] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.762 [2024-07-22 10:55:08.425916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.762 [2024-07-22 10:55:08.429424] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:02.762 [2024-07-22 10:55:08.438736] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.762 [2024-07-22 10:55:08.439359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.762 [2024-07-22 10:55:08.439405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.762 [2024-07-22 10:55:08.439418] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.762 [2024-07-22 10:55:08.439657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.762 [2024-07-22 10:55:08.439878] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.762 [2024-07-22 10:55:08.439887] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.762 [2024-07-22 10:55:08.439894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.762 [2024-07-22 10:55:08.443404] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:02.762 [2024-07-22 10:55:08.452501] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:02.762 [2024-07-22 10:55:08.453152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.762 [2024-07-22 10:55:08.453188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:02.762 [2024-07-22 10:55:08.453198] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:02.762 [2024-07-22 10:55:08.453443] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:02.762 [2024-07-22 10:55:08.453665] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:02.762 [2024-07-22 10:55:08.453673] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:02.762 [2024-07-22 10:55:08.453681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:02.762 [2024-07-22 10:55:08.457225] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:03.023 [2024-07-22 10:55:08.466328] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.023 [2024-07-22 10:55:08.466916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.023 [2024-07-22 10:55:08.466934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.023 [2024-07-22 10:55:08.466942] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.023 [2024-07-22 10:55:08.467159] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.023 [2024-07-22 10:55:08.467375] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.023 [2024-07-22 10:55:08.467384] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.023 [2024-07-22 10:55:08.467391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.023 [2024-07-22 10:55:08.470898] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:03.023 [2024-07-22 10:55:08.480199] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.023 [2024-07-22 10:55:08.480859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.023 [2024-07-22 10:55:08.480896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.023 [2024-07-22 10:55:08.480907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.023 [2024-07-22 10:55:08.481144] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.023 [2024-07-22 10:55:08.481365] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.023 [2024-07-22 10:55:08.481374] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.023 [2024-07-22 10:55:08.481382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.023 [2024-07-22 10:55:08.484899] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:03.023 [2024-07-22 10:55:08.493999] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.023 [2024-07-22 10:55:08.494492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.023 [2024-07-22 10:55:08.494511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.023 [2024-07-22 10:55:08.494519] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.023 [2024-07-22 10:55:08.494736] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.023 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2256900 Killed "${NVMF_APP[@]}" "$@" 00:39:03.023 [2024-07-22 10:55:08.494953] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.023 [2024-07-22 10:55:08.494961] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.023 [2024-07-22 10:55:08.494968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.023 10:55:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:39:03.023 10:55:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:39:03.023 10:55:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:39:03.023 10:55:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:39:03.023 10:55:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:03.023 [2024-07-22 10:55:08.498478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:03.023 10:55:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2258454 00:39:03.023 10:55:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2258454 00:39:03.023 10:55:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:39:03.023 10:55:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 2258454 ']' 00:39:03.023 10:55:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:03.023 10:55:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:03.023 10:55:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:03.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:39:03.023 10:55:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:03.023 10:55:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:03.023 [2024-07-22 10:55:08.507778] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.023 [2024-07-22 10:55:08.508428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.023 [2024-07-22 10:55:08.508465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.023 [2024-07-22 10:55:08.508477] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.023 [2024-07-22 10:55:08.508717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.023 [2024-07-22 10:55:08.508938] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.023 [2024-07-22 10:55:08.508947] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.023 [2024-07-22 10:55:08.508955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.023 [2024-07-22 10:55:08.512469] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:03.023 [2024-07-22 10:55:08.521571] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.023 [2024-07-22 10:55:08.522218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.023 [2024-07-22 10:55:08.522254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.023 [2024-07-22 10:55:08.522265] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.023 [2024-07-22 10:55:08.522509] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.023 [2024-07-22 10:55:08.522731] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.023 [2024-07-22 10:55:08.522740] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.023 [2024-07-22 10:55:08.522748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.023 [2024-07-22 10:55:08.526256] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
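At this point bdevperf.sh has killed the old target, started a fresh nvmf_tgt via nvmfappstart, and is waiting for it to listen on /var/tmp/spdk.sock. A rough sketch of that wait (an assumed mechanism for illustration, not the autotest waitforlisten helper itself) is to retry connect() on the UNIX domain socket until it succeeds or a timeout expires:

/* Sketch only: poll a UNIX domain socket until something is listening on it,
 * roughly what a "wait for the RPC socket" step needs to do. The socket path
 * comes from the log above; the timeout is an assumption for illustration. */
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

static int wait_for_unix_listener(const char *path, int timeout_sec)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    snprintf(addr.sun_path, sizeof(addr.sun_path), "%s", path);

    for (int i = 0; i < timeout_sec; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0) {
            return -1;
        }
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);                 /* connect succeeded: listener is up */
            return 0;
        }
        close(fd);
        sleep(1);                      /* wait a second and try again */
    }
    return -1;                         /* timed out */
}

int main(void)
{
    if (wait_for_unix_listener("/var/tmp/spdk.sock", 30) == 0) {
        printf("RPC socket is listening\n");
        return 0;
    }
    fprintf(stderr, "timed out waiting for RPC socket\n");
    return 1;
}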
00:39:03.023 [2024-07-22 10:55:08.535355] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.023 [2024-07-22 10:55:08.535958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.023 [2024-07-22 10:55:08.535977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.023 [2024-07-22 10:55:08.535985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.023 [2024-07-22 10:55:08.536202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.023 [2024-07-22 10:55:08.536425] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.023 [2024-07-22 10:55:08.536433] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.023 [2024-07-22 10:55:08.536441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.023 [2024-07-22 10:55:08.539959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:03.023 [2024-07-22 10:55:08.549102] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.023 [2024-07-22 10:55:08.549709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.023 [2024-07-22 10:55:08.549727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.023 [2024-07-22 10:55:08.549735] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.023 [2024-07-22 10:55:08.549958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.023 [2024-07-22 10:55:08.550174] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.023 [2024-07-22 10:55:08.550182] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.023 [2024-07-22 10:55:08.550189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.023 [2024-07-22 10:55:08.553330] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:39:03.023 [2024-07-22 10:55:08.553374] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:03.023 [2024-07-22 10:55:08.553697] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:03.023 [2024-07-22 10:55:08.563001] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.023 [2024-07-22 10:55:08.563564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.023 [2024-07-22 10:55:08.563581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.023 [2024-07-22 10:55:08.563589] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.023 [2024-07-22 10:55:08.563805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.023 [2024-07-22 10:55:08.564022] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.023 [2024-07-22 10:55:08.564030] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.023 [2024-07-22 10:55:08.564037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.023 [2024-07-22 10:55:08.567546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:03.023 [2024-07-22 10:55:08.576842] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.023 [2024-07-22 10:55:08.577435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.023 [2024-07-22 10:55:08.577452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.023 [2024-07-22 10:55:08.577460] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.023 [2024-07-22 10:55:08.577677] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.023 [2024-07-22 10:55:08.577893] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.023 [2024-07-22 10:55:08.577900] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.023 [2024-07-22 10:55:08.577907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.023 [2024-07-22 10:55:08.581413] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:03.023 EAL: No free 2048 kB hugepages reported on node 1 00:39:03.023 [2024-07-22 10:55:08.590713] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.023 [2024-07-22 10:55:08.591375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.023 [2024-07-22 10:55:08.591419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.023 [2024-07-22 10:55:08.591432] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.023 [2024-07-22 10:55:08.591675] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.023 [2024-07-22 10:55:08.591897] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.023 [2024-07-22 10:55:08.591906] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.023 [2024-07-22 10:55:08.591913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.023 [2024-07-22 10:55:08.595427] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:03.023 [2024-07-22 10:55:08.604615] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.023 [2024-07-22 10:55:08.605311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.023 [2024-07-22 10:55:08.605348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.023 [2024-07-22 10:55:08.605360] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.023 [2024-07-22 10:55:08.605610] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.023 [2024-07-22 10:55:08.605831] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.023 [2024-07-22 10:55:08.605840] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.023 [2024-07-22 10:55:08.605847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.023 [2024-07-22 10:55:08.609355] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:03.023 [2024-07-22 10:55:08.618453] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.023 [2024-07-22 10:55:08.618999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.023 [2024-07-22 10:55:08.619016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.023 [2024-07-22 10:55:08.619025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.023 [2024-07-22 10:55:08.619243] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.023 [2024-07-22 10:55:08.619468] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.023 [2024-07-22 10:55:08.619477] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.023 [2024-07-22 10:55:08.619484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.023 [2024-07-22 10:55:08.622989] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:03.023 [2024-07-22 10:55:08.632286] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.023 [2024-07-22 10:55:08.632970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.023 [2024-07-22 10:55:08.633006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.023 [2024-07-22 10:55:08.633018] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.023 [2024-07-22 10:55:08.633255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.023 [2024-07-22 10:55:08.633483] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.023 [2024-07-22 10:55:08.633493] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.023 [2024-07-22 10:55:08.633505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.023 [2024-07-22 10:55:08.637013] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:03.023 [2024-07-22 10:55:08.639966] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:03.023 [2024-07-22 10:55:08.646131] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.023 [2024-07-22 10:55:08.646746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.023 [2024-07-22 10:55:08.646765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.023 [2024-07-22 10:55:08.646773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.023 [2024-07-22 10:55:08.646992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.023 [2024-07-22 10:55:08.647209] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.023 [2024-07-22 10:55:08.647217] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.023 [2024-07-22 10:55:08.647225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.023 [2024-07-22 10:55:08.650738] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:03.023 [2024-07-22 10:55:08.660060] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.023 [2024-07-22 10:55:08.660789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.023 [2024-07-22 10:55:08.660830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.023 [2024-07-22 10:55:08.660842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.023 [2024-07-22 10:55:08.661086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.023 [2024-07-22 10:55:08.661308] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.023 [2024-07-22 10:55:08.661316] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.023 [2024-07-22 10:55:08.661325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.023 [2024-07-22 10:55:08.664890] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:03.023 [2024-07-22 10:55:08.668835] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:03.023 [2024-07-22 10:55:08.668862] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:03.024 [2024-07-22 10:55:08.668870] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:03.024 [2024-07-22 10:55:08.668877] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:03.024 [2024-07-22 10:55:08.668883] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:39:03.024 [2024-07-22 10:55:08.668992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:39:03.024 [2024-07-22 10:55:08.669149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:03.024 [2024-07-22 10:55:08.669151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:39:03.024 [2024-07-22 10:55:08.674003] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.024 [2024-07-22 10:55:08.674709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.024 [2024-07-22 10:55:08.674748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.024 [2024-07-22 10:55:08.674760] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.024 [2024-07-22 10:55:08.675008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.024 [2024-07-22 10:55:08.675231] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.024 [2024-07-22 10:55:08.675239] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.024 [2024-07-22 10:55:08.675247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.024 [2024-07-22 10:55:08.678766] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:03.024 [2024-07-22 10:55:08.687868] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.024 [2024-07-22 10:55:08.688532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.024 [2024-07-22 10:55:08.688572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.024 [2024-07-22 10:55:08.688585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.024 [2024-07-22 10:55:08.688827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.024 [2024-07-22 10:55:08.689050] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.024 [2024-07-22 10:55:08.689058] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.024 [2024-07-22 10:55:08.689067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.024 [2024-07-22 10:55:08.692586] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
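The new target was started with core mask 0xE, and the three reactors above come up on cores 1, 2 and 3, which are exactly the set bits of that mask. A small illustration of decoding such a mask (plain bit scanning, not SPDK's option parser):

/* Illustration only: list the CPU cores selected by a hex core mask.
 * 0xE has bits 1, 2 and 3 set, matching the three reactors in the log. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    unsigned long mask = strtoul("0xE", NULL, 16);

    printf("core mask 0x%lX selects cores:", mask);
    for (unsigned int core = 0; core < 8 * sizeof(mask); core++) {
        if (mask & (1UL << core)) {
            printf(" %u", core);
        }
    }
    printf("\n");     /* prints: core mask 0xE selects cores: 1 2 3 */
    return 0;
}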
00:39:03.024 [2024-07-22 10:55:08.701681] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.024 [2024-07-22 10:55:08.702270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.024 [2024-07-22 10:55:08.702290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.024 [2024-07-22 10:55:08.702299] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.024 [2024-07-22 10:55:08.702523] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.024 [2024-07-22 10:55:08.702741] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.024 [2024-07-22 10:55:08.702749] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.024 [2024-07-22 10:55:08.702757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.024 [2024-07-22 10:55:08.706259] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:03.024 [2024-07-22 10:55:08.715561] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.024 [2024-07-22 10:55:08.716117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.024 [2024-07-22 10:55:08.716155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.024 [2024-07-22 10:55:08.716169] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.024 [2024-07-22 10:55:08.716419] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.024 [2024-07-22 10:55:08.716641] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.024 [2024-07-22 10:55:08.716650] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.024 [2024-07-22 10:55:08.716663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.024 [2024-07-22 10:55:08.720170] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:03.284 [2024-07-22 10:55:08.729481] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.284 [2024-07-22 10:55:08.729913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.284 [2024-07-22 10:55:08.729931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.284 [2024-07-22 10:55:08.729939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.284 [2024-07-22 10:55:08.730157] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.284 [2024-07-22 10:55:08.730373] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.284 [2024-07-22 10:55:08.730381] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.284 [2024-07-22 10:55:08.730388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.284 [2024-07-22 10:55:08.733897] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:03.284 [2024-07-22 10:55:08.743419] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.284 [2024-07-22 10:55:08.744110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.284 [2024-07-22 10:55:08.744147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.284 [2024-07-22 10:55:08.744158] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.284 [2024-07-22 10:55:08.744405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.284 [2024-07-22 10:55:08.744627] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.284 [2024-07-22 10:55:08.744636] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.284 [2024-07-22 10:55:08.744644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.284 [2024-07-22 10:55:08.748155] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:03.284 [2024-07-22 10:55:08.757482] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.284 [2024-07-22 10:55:08.758145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.284 [2024-07-22 10:55:08.758182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.284 [2024-07-22 10:55:08.758194] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.284 [2024-07-22 10:55:08.758440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.284 [2024-07-22 10:55:08.758662] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.284 [2024-07-22 10:55:08.758671] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.284 [2024-07-22 10:55:08.758679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.284 [2024-07-22 10:55:08.762186] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:03.284 [2024-07-22 10:55:08.771284] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.284 [2024-07-22 10:55:08.771941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.284 [2024-07-22 10:55:08.771977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.284 [2024-07-22 10:55:08.771989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.284 [2024-07-22 10:55:08.772226] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.284 [2024-07-22 10:55:08.772454] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.284 [2024-07-22 10:55:08.772463] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.284 [2024-07-22 10:55:08.772471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.284 [2024-07-22 10:55:08.775979] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:03.284 [2024-07-22 10:55:08.785076] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.284 [2024-07-22 10:55:08.785765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.284 [2024-07-22 10:55:08.785803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.284 [2024-07-22 10:55:08.785814] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.284 [2024-07-22 10:55:08.786051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.284 [2024-07-22 10:55:08.786272] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.284 [2024-07-22 10:55:08.786281] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.284 [2024-07-22 10:55:08.786288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.284 [2024-07-22 10:55:08.789806] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:03.284 [2024-07-22 10:55:08.798904] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.284 [2024-07-22 10:55:08.799510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.284 [2024-07-22 10:55:08.799547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.284 [2024-07-22 10:55:08.799559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.284 [2024-07-22 10:55:08.799799] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.284 [2024-07-22 10:55:08.800020] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.284 [2024-07-22 10:55:08.800028] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.284 [2024-07-22 10:55:08.800036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.284 [2024-07-22 10:55:08.803555] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:03.284 [2024-07-22 10:55:08.812653] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.284 [2024-07-22 10:55:08.813356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.284 [2024-07-22 10:55:08.813394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.284 [2024-07-22 10:55:08.813414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.284 [2024-07-22 10:55:08.813653] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.285 [2024-07-22 10:55:08.813879] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.285 [2024-07-22 10:55:08.813888] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.285 [2024-07-22 10:55:08.813896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.285 [2024-07-22 10:55:08.817410] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:03.285 [2024-07-22 10:55:08.826504] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.285 [2024-07-22 10:55:08.827054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.285 [2024-07-22 10:55:08.827072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.285 [2024-07-22 10:55:08.827080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.285 [2024-07-22 10:55:08.827298] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.285 [2024-07-22 10:55:08.827522] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.285 [2024-07-22 10:55:08.827531] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.285 [2024-07-22 10:55:08.827538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.285 [2024-07-22 10:55:08.831041] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:03.285 [2024-07-22 10:55:08.840352] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.285 [2024-07-22 10:55:08.841040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.285 [2024-07-22 10:55:08.841078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.285 [2024-07-22 10:55:08.841089] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.285 [2024-07-22 10:55:08.841326] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.285 [2024-07-22 10:55:08.841555] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.285 [2024-07-22 10:55:08.841564] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.285 [2024-07-22 10:55:08.841572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.285 [2024-07-22 10:55:08.845083] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:03.285 [2024-07-22 10:55:08.854187] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.285 [2024-07-22 10:55:08.854800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.285 [2024-07-22 10:55:08.854818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.285 [2024-07-22 10:55:08.854826] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.285 [2024-07-22 10:55:08.855049] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.285 [2024-07-22 10:55:08.855268] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.285 [2024-07-22 10:55:08.855276] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.285 [2024-07-22 10:55:08.855284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.285 [2024-07-22 10:55:08.858804] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
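Each NOTICE-to-ERROR group in this log is one pass of the same reconnect attempt, repeated every few milliseconds for as long as the target port stays closed. Reduced to its essentials, that pattern is a bounded retry loop; the sketch below uses an assumed attempt budget and delay and is not the bdev_nvme reconnect poller:

/* Sketch of a capped reconnect loop: try to connect, and if it fails
 * (here with ECONNREFUSED while nothing listens), wait briefly and try
 * again until the attempt budget runs out. Address, port, budget and
 * delay are assumptions for illustration. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

static int try_connect(const char *ip, unsigned short port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        return -1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(port),
    };
    inet_pton(AF_INET, ip, &addr.sin_addr);

    int rc = connect(fd, (struct sockaddr *)&addr, sizeof(addr));
    close(fd);
    return rc;           /* 0 on success, -1 with errno set on failure */
}

int main(void)
{
    const int max_attempts = 10;

    for (int attempt = 1; attempt <= max_attempts; attempt++) {
        if (try_connect("127.0.0.1", 4420) == 0) {
            printf("attempt %d: connected\n", attempt);
            return 0;
        }
        printf("attempt %d: connect() failed, errno = %d (%s)\n",
               attempt, errno, strerror(errno));
        sleep(1);        /* back off before retrying */
    }

    fprintf(stderr, "giving up after %d attempts\n", max_attempts);
    return 1;
}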
00:39:03.285 [2024-07-22 10:55:08.868114] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.285 [2024-07-22 10:55:08.868812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.285 [2024-07-22 10:55:08.868849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.285 [2024-07-22 10:55:08.868860] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.285 [2024-07-22 10:55:08.869098] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.285 [2024-07-22 10:55:08.869318] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.285 [2024-07-22 10:55:08.869327] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.285 [2024-07-22 10:55:08.869335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.285 [2024-07-22 10:55:08.872894] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:03.285 [2024-07-22 10:55:08.881998] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.285 [2024-07-22 10:55:08.882570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.285 [2024-07-22 10:55:08.882589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.285 [2024-07-22 10:55:08.882598] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.285 [2024-07-22 10:55:08.882816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.285 [2024-07-22 10:55:08.883033] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.285 [2024-07-22 10:55:08.883041] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.285 [2024-07-22 10:55:08.883049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.285 [2024-07-22 10:55:08.886557] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:03.285 [2024-07-22 10:55:08.895856] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.285 [2024-07-22 10:55:08.896407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.285 [2024-07-22 10:55:08.896424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.285 [2024-07-22 10:55:08.896432] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.285 [2024-07-22 10:55:08.896649] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.285 [2024-07-22 10:55:08.896865] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.285 [2024-07-22 10:55:08.896873] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.285 [2024-07-22 10:55:08.896881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.285 [2024-07-22 10:55:08.900382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:03.285 [2024-07-22 10:55:08.909684] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.285 [2024-07-22 10:55:08.910376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.285 [2024-07-22 10:55:08.910420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.285 [2024-07-22 10:55:08.910437] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.285 [2024-07-22 10:55:08.910676] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.285 [2024-07-22 10:55:08.910897] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.285 [2024-07-22 10:55:08.910906] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.285 [2024-07-22 10:55:08.910913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.285 [2024-07-22 10:55:08.914425] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:03.285 [2024-07-22 10:55:08.923523] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.285 [2024-07-22 10:55:08.924109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.285 [2024-07-22 10:55:08.924146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.285 [2024-07-22 10:55:08.924158] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.285 [2024-07-22 10:55:08.924403] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.285 [2024-07-22 10:55:08.924625] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.285 [2024-07-22 10:55:08.924634] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.285 [2024-07-22 10:55:08.924641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.285 [2024-07-22 10:55:08.928149] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:03.285 [2024-07-22 10:55:08.937455] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.285 [2024-07-22 10:55:08.938014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.285 [2024-07-22 10:55:08.938032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.285 [2024-07-22 10:55:08.938040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.285 [2024-07-22 10:55:08.938257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.285 [2024-07-22 10:55:08.938490] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.285 [2024-07-22 10:55:08.938499] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.285 [2024-07-22 10:55:08.938507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.285 [2024-07-22 10:55:08.942009] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:03.285 [2024-07-22 10:55:08.951312] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.285 [2024-07-22 10:55:08.952026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.285 [2024-07-22 10:55:08.952062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.285 [2024-07-22 10:55:08.952073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.285 [2024-07-22 10:55:08.952311] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.285 [2024-07-22 10:55:08.952546] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.285 [2024-07-22 10:55:08.952556] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.285 [2024-07-22 10:55:08.952564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.285 [2024-07-22 10:55:08.956078] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:03.285 [2024-07-22 10:55:08.965180] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.285 [2024-07-22 10:55:08.965901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.285 [2024-07-22 10:55:08.965938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.285 [2024-07-22 10:55:08.965949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.285 [2024-07-22 10:55:08.966186] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.285 [2024-07-22 10:55:08.966415] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.285 [2024-07-22 10:55:08.966424] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.285 [2024-07-22 10:55:08.966431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.285 [2024-07-22 10:55:08.969940] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:03.285 [2024-07-22 10:55:08.979030] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.285 [2024-07-22 10:55:08.979597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.285 [2024-07-22 10:55:08.979616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.285 [2024-07-22 10:55:08.979624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.285 [2024-07-22 10:55:08.979842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.285 [2024-07-22 10:55:08.980058] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.285 [2024-07-22 10:55:08.980066] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.285 [2024-07-22 10:55:08.980073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.547 [2024-07-22 10:55:08.983580] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:03.547 [2024-07-22 10:55:08.992877] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.547 [2024-07-22 10:55:08.993476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.547 [2024-07-22 10:55:08.993514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.547 [2024-07-22 10:55:08.993526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.547 [2024-07-22 10:55:08.993765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.547 [2024-07-22 10:55:08.993986] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.547 [2024-07-22 10:55:08.993995] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.547 [2024-07-22 10:55:08.994002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.547 [2024-07-22 10:55:08.997516] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:03.547 [2024-07-22 10:55:09.006821] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.547 [2024-07-22 10:55:09.007519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.547 [2024-07-22 10:55:09.007556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.547 [2024-07-22 10:55:09.007568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.547 [2024-07-22 10:55:09.007807] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.547 [2024-07-22 10:55:09.008028] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.547 [2024-07-22 10:55:09.008036] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.547 [2024-07-22 10:55:09.008045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.547 [2024-07-22 10:55:09.011561] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:03.547 [2024-07-22 10:55:09.020657] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.547 [2024-07-22 10:55:09.021207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.547 [2024-07-22 10:55:09.021243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.547 [2024-07-22 10:55:09.021254] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.547 [2024-07-22 10:55:09.021501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.547 [2024-07-22 10:55:09.021723] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.547 [2024-07-22 10:55:09.021732] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.547 [2024-07-22 10:55:09.021740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.547 [2024-07-22 10:55:09.025250] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:03.547 [2024-07-22 10:55:09.034551] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.547 [2024-07-22 10:55:09.035147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.547 [2024-07-22 10:55:09.035164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.547 [2024-07-22 10:55:09.035172] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.547 [2024-07-22 10:55:09.035390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.547 [2024-07-22 10:55:09.035613] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.547 [2024-07-22 10:55:09.035621] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.547 [2024-07-22 10:55:09.035629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.547 [2024-07-22 10:55:09.039139] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:03.547 [2024-07-22 10:55:09.048440] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.547 [2024-07-22 10:55:09.048990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.547 [2024-07-22 10:55:09.049005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.547 [2024-07-22 10:55:09.049017] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.547 [2024-07-22 10:55:09.049235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.547 [2024-07-22 10:55:09.049458] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.547 [2024-07-22 10:55:09.049466] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.547 [2024-07-22 10:55:09.049474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.547 [2024-07-22 10:55:09.052975] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:03.547 [2024-07-22 10:55:09.062274] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.547 [2024-07-22 10:55:09.062869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.547 [2024-07-22 10:55:09.062885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.547 [2024-07-22 10:55:09.062893] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.547 [2024-07-22 10:55:09.063109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.547 [2024-07-22 10:55:09.063326] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.547 [2024-07-22 10:55:09.063334] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.547 [2024-07-22 10:55:09.063341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.547 [2024-07-22 10:55:09.066845] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:03.547 [2024-07-22 10:55:09.076138] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.547 [2024-07-22 10:55:09.076783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.547 [2024-07-22 10:55:09.076820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.547 [2024-07-22 10:55:09.076832] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.547 [2024-07-22 10:55:09.077071] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.547 [2024-07-22 10:55:09.077292] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.547 [2024-07-22 10:55:09.077301] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.547 [2024-07-22 10:55:09.077308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.547 [2024-07-22 10:55:09.080866] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:03.547 [2024-07-22 10:55:09.089972] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.547 [2024-07-22 10:55:09.090702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.547 [2024-07-22 10:55:09.090740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.547 [2024-07-22 10:55:09.090751] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.547 [2024-07-22 10:55:09.090989] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.547 [2024-07-22 10:55:09.091210] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.547 [2024-07-22 10:55:09.091223] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.547 [2024-07-22 10:55:09.091231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.547 [2024-07-22 10:55:09.094744] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:03.547 [2024-07-22 10:55:09.103841] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.547 [2024-07-22 10:55:09.104540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.547 [2024-07-22 10:55:09.104576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.547 [2024-07-22 10:55:09.104589] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.547 [2024-07-22 10:55:09.104827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.547 [2024-07-22 10:55:09.105048] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.547 [2024-07-22 10:55:09.105057] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.547 [2024-07-22 10:55:09.105065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.547 [2024-07-22 10:55:09.108581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:03.547 [2024-07-22 10:55:09.117674] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.547 [2024-07-22 10:55:09.118237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.547 [2024-07-22 10:55:09.118255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.547 [2024-07-22 10:55:09.118264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.547 [2024-07-22 10:55:09.118487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.547 [2024-07-22 10:55:09.118705] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.547 [2024-07-22 10:55:09.118713] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.547 [2024-07-22 10:55:09.118720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.547 [2024-07-22 10:55:09.122221] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:03.547 [2024-07-22 10:55:09.131518] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.547 [2024-07-22 10:55:09.131938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.547 [2024-07-22 10:55:09.131953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.547 [2024-07-22 10:55:09.131960] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.547 [2024-07-22 10:55:09.132177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.548 [2024-07-22 10:55:09.132393] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.548 [2024-07-22 10:55:09.132407] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.548 [2024-07-22 10:55:09.132414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.548 [2024-07-22 10:55:09.135915] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:03.548 [2024-07-22 10:55:09.145431] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.548 [2024-07-22 10:55:09.145865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.548 [2024-07-22 10:55:09.145880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.548 [2024-07-22 10:55:09.145889] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.548 [2024-07-22 10:55:09.146105] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.548 [2024-07-22 10:55:09.146322] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.548 [2024-07-22 10:55:09.146329] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.548 [2024-07-22 10:55:09.146336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.548 [2024-07-22 10:55:09.149842] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:03.548 [2024-07-22 10:55:09.159350] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.548 [2024-07-22 10:55:09.159942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.548 [2024-07-22 10:55:09.159958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.548 [2024-07-22 10:55:09.159966] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.548 [2024-07-22 10:55:09.160182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.548 [2024-07-22 10:55:09.160403] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.548 [2024-07-22 10:55:09.160411] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.548 [2024-07-22 10:55:09.160418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.548 [2024-07-22 10:55:09.163920] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:03.548 [2024-07-22 10:55:09.173217] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.548 [2024-07-22 10:55:09.173779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.548 [2024-07-22 10:55:09.173816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.548 [2024-07-22 10:55:09.173829] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.548 [2024-07-22 10:55:09.174068] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.548 [2024-07-22 10:55:09.174289] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.548 [2024-07-22 10:55:09.174298] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.548 [2024-07-22 10:55:09.174305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.548 [2024-07-22 10:55:09.177820] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:03.548 [2024-07-22 10:55:09.187166] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.548 [2024-07-22 10:55:09.187742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.548 [2024-07-22 10:55:09.187779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.548 [2024-07-22 10:55:09.187790] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.548 [2024-07-22 10:55:09.188032] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.548 [2024-07-22 10:55:09.188254] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.548 [2024-07-22 10:55:09.188263] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.548 [2024-07-22 10:55:09.188271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.548 [2024-07-22 10:55:09.191787] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:03.548 [2024-07-22 10:55:09.201091] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.548 [2024-07-22 10:55:09.201406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.548 [2024-07-22 10:55:09.201427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.548 [2024-07-22 10:55:09.201435] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.548 [2024-07-22 10:55:09.201653] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.548 [2024-07-22 10:55:09.201872] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.548 [2024-07-22 10:55:09.201880] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.548 [2024-07-22 10:55:09.201887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.548 [2024-07-22 10:55:09.205391] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:03.548 [2024-07-22 10:55:09.214894] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.548 [2024-07-22 10:55:09.215457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.548 [2024-07-22 10:55:09.215480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.548 [2024-07-22 10:55:09.215489] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.548 [2024-07-22 10:55:09.215711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.548 [2024-07-22 10:55:09.215929] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.548 [2024-07-22 10:55:09.215937] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.548 [2024-07-22 10:55:09.215944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.548 [2024-07-22 10:55:09.219453] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:03.548 [2024-07-22 10:55:09.228746] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.548 [2024-07-22 10:55:09.229447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.548 [2024-07-22 10:55:09.229483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.548 [2024-07-22 10:55:09.229496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.548 [2024-07-22 10:55:09.229735] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.548 [2024-07-22 10:55:09.229956] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.548 [2024-07-22 10:55:09.229965] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.548 [2024-07-22 10:55:09.229977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.548 [2024-07-22 10:55:09.233491] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:03.548 [2024-07-22 10:55:09.242595] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.548 [2024-07-22 10:55:09.243305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.548 [2024-07-22 10:55:09.243342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.548 [2024-07-22 10:55:09.243355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.548 [2024-07-22 10:55:09.243602] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.548 [2024-07-22 10:55:09.243824] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.548 [2024-07-22 10:55:09.243832] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.548 [2024-07-22 10:55:09.243840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.808 [2024-07-22 10:55:09.247349] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:03.808 [2024-07-22 10:55:09.256454] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.808 [2024-07-22 10:55:09.257147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.808 [2024-07-22 10:55:09.257184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.808 [2024-07-22 10:55:09.257195] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.808 [2024-07-22 10:55:09.257441] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.808 [2024-07-22 10:55:09.257663] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.808 [2024-07-22 10:55:09.257672] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.808 [2024-07-22 10:55:09.257680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.808 [2024-07-22 10:55:09.261186] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:03.808 [2024-07-22 10:55:09.270278] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.808 [2024-07-22 10:55:09.270980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.808 [2024-07-22 10:55:09.271017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.808 [2024-07-22 10:55:09.271028] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.808 [2024-07-22 10:55:09.271267] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.808 [2024-07-22 10:55:09.271497] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.808 [2024-07-22 10:55:09.271506] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.808 [2024-07-22 10:55:09.271514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.808 [2024-07-22 10:55:09.275022] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:03.808 [2024-07-22 10:55:09.284120] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.808 [2024-07-22 10:55:09.284813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.808 [2024-07-22 10:55:09.284853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.808 [2024-07-22 10:55:09.284865] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.808 [2024-07-22 10:55:09.285103] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.808 [2024-07-22 10:55:09.285324] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.808 [2024-07-22 10:55:09.285333] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.808 [2024-07-22 10:55:09.285341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.808 [2024-07-22 10:55:09.288886] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:03.808 [2024-07-22 10:55:09.297991] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.808 [2024-07-22 10:55:09.298699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.808 [2024-07-22 10:55:09.298736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.808 [2024-07-22 10:55:09.298748] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.808 [2024-07-22 10:55:09.298986] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.808 [2024-07-22 10:55:09.299206] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.808 [2024-07-22 10:55:09.299215] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.808 [2024-07-22 10:55:09.299222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.808 [2024-07-22 10:55:09.302737] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:03.808 [2024-07-22 10:55:09.311830] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.808 [2024-07-22 10:55:09.312538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.808 [2024-07-22 10:55:09.312575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.808 [2024-07-22 10:55:09.312586] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.808 [2024-07-22 10:55:09.312824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.808 [2024-07-22 10:55:09.313044] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.808 [2024-07-22 10:55:09.313052] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.808 [2024-07-22 10:55:09.313060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.808 [2024-07-22 10:55:09.316576] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:03.808 10:55:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:03.808 [2024-07-22 10:55:09.325668] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.808 10:55:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:39:03.808 10:55:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:39:03.808 [2024-07-22 10:55:09.326358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.808 [2024-07-22 10:55:09.326402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.808 [2024-07-22 10:55:09.326419] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.808 10:55:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:03.808 [2024-07-22 10:55:09.326658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.808 10:55:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:03.808 [2024-07-22 10:55:09.326879] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.808 [2024-07-22 10:55:09.326888] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.808 [2024-07-22 10:55:09.326896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.808 [2024-07-22 10:55:09.330407] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:03.808 [2024-07-22 10:55:09.339503] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.808 [2024-07-22 10:55:09.340053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.808 [2024-07-22 10:55:09.340071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.808 [2024-07-22 10:55:09.340080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.808 [2024-07-22 10:55:09.340297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.808 [2024-07-22 10:55:09.340529] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.809 [2024-07-22 10:55:09.340540] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.809 [2024-07-22 10:55:09.340548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.809 [2024-07-22 10:55:09.344052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:03.809 [2024-07-22 10:55:09.353356] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.809 [2024-07-22 10:55:09.354057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.809 [2024-07-22 10:55:09.354094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.809 [2024-07-22 10:55:09.354105] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.809 [2024-07-22 10:55:09.354343] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.809 [2024-07-22 10:55:09.354571] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.809 [2024-07-22 10:55:09.354581] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.809 [2024-07-22 10:55:09.354590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.809 [2024-07-22 10:55:09.358102] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:03.809 10:55:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:03.809 10:55:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:03.809 10:55:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:03.809 10:55:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:03.809 [2024-07-22 10:55:09.367210] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.809 [2024-07-22 10:55:09.367779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.809 [2024-07-22 10:55:09.367803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.809 [2024-07-22 10:55:09.367811] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.809 [2024-07-22 10:55:09.368029] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.809 [2024-07-22 10:55:09.368246] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.809 [2024-07-22 10:55:09.368254] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.809 [2024-07-22 10:55:09.368261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.809 [2024-07-22 10:55:09.370525] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:03.809 [2024-07-22 10:55:09.371770] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:03.809 [2024-07-22 10:55:09.381108] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.809 [2024-07-22 10:55:09.381802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.809 [2024-07-22 10:55:09.381839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.809 [2024-07-22 10:55:09.381850] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.809 [2024-07-22 10:55:09.382088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.809 [2024-07-22 10:55:09.382309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.809 [2024-07-22 10:55:09.382318] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.809 [2024-07-22 10:55:09.382325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.809 [2024-07-22 10:55:09.385840] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:03.809 10:55:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:03.809 10:55:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:03.809 10:55:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:03.809 10:55:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:03.809 [2024-07-22 10:55:09.394934] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.809 [2024-07-22 10:55:09.395484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.809 [2024-07-22 10:55:09.395521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.809 [2024-07-22 10:55:09.395534] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.809 [2024-07-22 10:55:09.395774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.809 [2024-07-22 10:55:09.395996] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.809 [2024-07-22 10:55:09.396004] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.809 [2024-07-22 10:55:09.396012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.809 [2024-07-22 10:55:09.399527] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:03.809 Malloc0 00:39:03.809 10:55:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:03.809 10:55:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:03.809 10:55:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:03.809 10:55:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:03.809 [2024-07-22 10:55:09.408827] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.809 [2024-07-22 10:55:09.409280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.809 [2024-07-22 10:55:09.409298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.809 [2024-07-22 10:55:09.409307] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.809 [2024-07-22 10:55:09.409530] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.809 [2024-07-22 10:55:09.409747] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.809 [2024-07-22 10:55:09.409755] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.809 [2024-07-22 10:55:09.409762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.809 [2024-07-22 10:55:09.413269] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:03.809 10:55:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:03.809 10:55:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:03.809 10:55:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:03.809 10:55:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:03.809 [2024-07-22 10:55:09.422772] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.809 [2024-07-22 10:55:09.423325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:03.809 [2024-07-22 10:55:09.423362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c170 with addr=10.0.0.2, port=4420 00:39:03.809 [2024-07-22 10:55:09.423374] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108c170 is same with the state(5) to be set 00:39:03.809 [2024-07-22 10:55:09.423621] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108c170 (9): Bad file descriptor 00:39:03.809 [2024-07-22 10:55:09.423843] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:03.809 [2024-07-22 10:55:09.423851] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:03.809 [2024-07-22 10:55:09.423859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:03.809 [2024-07-22 10:55:09.427366] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:03.809 10:55:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:03.809 10:55:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:03.809 10:55:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:03.809 10:55:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:03.809 [2024-07-22 10:55:09.436111] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:03.809 [2024-07-22 10:55:09.436667] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:03.809 10:55:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:03.809 10:55:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2257308 00:39:04.068 [2024-07-22 10:55:09.565271] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
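Interleaved with the reconnect noise above, the bdevperf.sh trace brings up the target side over JSON-RPC: it creates the TCP transport, a 64 MB malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1, attaches the bdev as a namespace, and finally adds the 10.0.0.2:4420 listener, after which the last reset completes successfully. Below is a minimal sketch that replays the same calls by hand with scripts/rpc.py against an already running nvmf_tgt; the RPC names and arguments are copied verbatim from the trace, while the rpc.py path and the use of the default RPC socket are assumptions for illustration.

    #!/usr/bin/env bash
    set -e
    # Replay of the target-side setup recorded in the trace above.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $RPC nvmf_create_transport -t tcp -o -u 8192        # TCP transport, flags as traced
    $RPC bdev_malloc_create 64 512 -b Malloc0           # 64 MB bdev with 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # allow any host, set serial
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # attach Malloc0 as a namespace
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # listen on 10.0.0.2:4420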
00:39:12.207
00:39:12.207 Latency(us)
00:39:12.207 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:12.207 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:39:12.207 Verification LBA range: start 0x0 length 0x4000
00:39:12.207 Nvme1n1 : 15.00 8254.21 32.24 10030.72 0.00 6974.81 778.24 16274.77
00:39:12.207 ===================================================================================================================
00:39:12.207 Total : 8254.21 32.24 10030.72 0.00 6974.81 778.24 16274.77
00:39:12.468 10:55:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:39:12.468 10:55:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:39:12.468 10:55:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:39:12.468 10:55:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:39:12.468 10:55:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:39:12.468 10:55:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:39:12.468 10:55:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:39:12.468 10:55:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
00:39:12.468 10:55:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
00:39:12.468 10:55:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:39:12.468 10:55:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
00:39:12.468 10:55:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
00:39:12.468 10:55:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:39:12.468 rmmod nvme_tcp
00:39:12.468 rmmod nvme_fabrics
00:39:12.468 rmmod nvme_keyring
00:39:12.468 10:55:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:39:12.468 10:55:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e
00:39:12.468 10:55:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0
00:39:12.468 10:55:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 2258454 ']'
00:39:12.468 10:55:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 2258454
00:39:12.468 10:55:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 2258454 ']'
00:39:12.468 10:55:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 2258454
00:39:12.468 10:55:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname
00:39:12.468 10:55:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:39:12.468 10:55:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2258454
00:39:12.468 10:55:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:39:12.468 10:55:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:39:12.468 10:55:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2258454'
00:39:12.468 killing process with pid 2258454
00:39:12.468 10:55:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 2258454
00:39:12.468 10:55:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 2258454
00:39:12.729 10:55:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:39:12.729 10:55:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
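The summary table reads as follows: the Nvme1n1 job ran for 15.00 s at 8254.21 IOPS (32.24 MiB/s), with 10030.72 failed I/Os per second, no timeouts, and an average completion latency of roughly 6975 us (min 778, max 16275); the high failure rate is expected given the forced disconnects above. The trace then tears the test bed down: the subsystem is deleted over RPC and the host-side NVMe modules are unloaded. A sketch of the same teardown done by hand, with the rpc.py path again an assumption:

    #!/usr/bin/env bash
    # Manual equivalent of the nvmftestfini teardown traced above.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # remove the test subsystem
    modprobe -v -r nvme-tcp       # per the trace, this also drops nvme_fabrics and nvme_keyring
    modprobe -v -r nvme-fabrics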
00:39:12.729 10:55:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:39:12.729 10:55:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:12.729 10:55:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:39:12.729 10:55:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:12.729 10:55:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:12.729 10:55:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:14.640 10:55:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:39:14.640 00:39:14.640 real 0m28.634s 00:39:14.640 user 1m2.945s 00:39:14.640 sys 0m7.776s 00:39:14.640 10:55:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:14.640 10:55:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:14.640 ************************************ 00:39:14.640 END TEST nvmf_bdevperf 00:39:14.640 ************************************ 00:39:14.640 10:55:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:39:14.640 10:55:20 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:39:14.640 10:55:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:39:14.640 10:55:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:14.640 10:55:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:14.640 ************************************ 00:39:14.640 START TEST nvmf_target_disconnect 00:39:14.640 ************************************ 00:39:14.640 10:55:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:39:14.901 * Looking for test storage... 
00:39:14.901 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:39:14.901 10:55:20 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:14.901 10:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:39:14.901 10:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:14.901 10:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:14.901 10:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:14.901 10:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:14.901 10:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:14.901 10:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:14.901 10:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:14.901 10:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:14.901 10:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:14.901 10:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:14.901 10:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:14.901 10:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:14.901 10:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:14.901 10:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:14.901 10:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:14.901 10:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:14.901 10:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:14.901 10:55:20 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:14.901 10:55:20 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:14.901 10:55:20 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:14.901 10:55:20 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:14.902 10:55:20 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:14.902 10:55:20 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:14.902 10:55:20 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:39:14.902 10:55:20 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:14.902 10:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:39:14.902 10:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:14.902 10:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:14.902 10:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:14.902 10:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:14.902 10:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:14.902 10:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:14.902 10:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:14.902 10:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:14.902 10:55:20 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:39:14.902 10:55:20 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:39:14.902 10:55:20 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:39:14.902 10:55:20 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:39:14.902 10:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:39:14.902 10:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:14.902 10:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:39:14.902 10:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:39:14.902 10:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:39:14.902 10:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:14.902 10:55:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:14.902 10:55:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:14.902 10:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:39:14.902 10:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:39:14.902 10:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:39:14.902 10:55:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:39:23.084 Found 0000:31:00.0 (0x8086 - 0x159b) 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:39:23.084 Found 0000:31:00.1 (0x8086 - 0x159b) 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:23.084 10:55:28 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:39:23.084 Found net devices under 0000:31:00.0: cvl_0_0 00:39:23.084 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:39:23.085 Found net devices under 0000:31:00.1: cvl_0_1 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:39:23.085 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:23.085 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.677 ms 00:39:23.085 00:39:23.085 --- 10.0.0.2 ping statistics --- 00:39:23.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:23.085 rtt min/avg/max/mdev = 0.677/0.677/0.677/0.000 ms 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:23.085 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:23.085 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.345 ms 00:39:23.085 00:39:23.085 --- 10.0.0.1 ping statistics --- 00:39:23.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:23.085 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:39:23.085 ************************************ 00:39:23.085 START TEST nvmf_target_disconnect_tc1 00:39:23.085 ************************************ 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:39:23.085 
10:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:39:23.085 EAL: No free 2048 kB hugepages reported on node 1 00:39:23.085 [2024-07-22 10:55:28.628632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.085 [2024-07-22 10:55:28.628704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d67a0 with addr=10.0.0.2, port=4420 00:39:23.085 [2024-07-22 10:55:28.628742] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:39:23.085 [2024-07-22 10:55:28.628767] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:39:23.085 [2024-07-22 10:55:28.628775] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:39:23.085 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:39:23.085 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:39:23.085 Initializing NVMe Controllers 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:23.085 00:39:23.085 real 0m0.119s 00:39:23.085 user 0m0.045s 00:39:23.085 sys 
0m0.073s 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:39:23.085 ************************************ 00:39:23.085 END TEST nvmf_target_disconnect_tc1 00:39:23.085 ************************************ 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:39:23.085 ************************************ 00:39:23.085 START TEST nvmf_target_disconnect_tc2 00:39:23.085 ************************************ 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2264985 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2264985 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2264985 ']' 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:23.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:23.085 10:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:23.086 [2024-07-22 10:55:28.781045] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:39:23.086 [2024-07-22 10:55:28.781101] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:23.346 EAL: No free 2048 kB hugepages reported on node 1 00:39:23.346 [2024-07-22 10:55:28.875550] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:23.346 [2024-07-22 10:55:28.922095] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:23.346 [2024-07-22 10:55:28.922145] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:23.346 [2024-07-22 10:55:28.922154] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:23.346 [2024-07-22 10:55:28.922161] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:23.346 [2024-07-22 10:55:28.922167] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:23.346 [2024-07-22 10:55:28.922318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:39:23.346 [2024-07-22 10:55:28.922445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:39:23.346 [2024-07-22 10:55:28.922591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:39:23.346 [2024-07-22 10:55:28.922592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:39:23.916 10:55:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:23.916 10:55:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:39:23.916 10:55:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:39:23.916 10:55:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:23.916 10:55:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:23.916 10:55:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:23.916 10:55:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:23.916 10:55:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:23.916 10:55:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:24.177 Malloc0 00:39:24.177 10:55:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:24.177 10:55:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:39:24.177 10:55:29 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:24.177 10:55:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:24.177 [2024-07-22 10:55:29.634223] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:24.177 10:55:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:24.177 10:55:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:24.177 10:55:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:24.177 10:55:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:24.177 10:55:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:24.177 10:55:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:24.177 10:55:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:24.177 10:55:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:24.177 10:55:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:24.177 10:55:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:24.177 10:55:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:24.177 10:55:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:24.177 [2024-07-22 10:55:29.674490] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:24.177 10:55:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:24.177 10:55:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:24.177 10:55:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:24.177 10:55:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:24.177 10:55:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:24.177 10:55:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2265075 00:39:24.177 10:55:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:39:24.177 10:55:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:39:24.177 EAL: No free 2048 kB 
hugepages reported on node 1 00:39:26.093 10:55:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2264985 00:39:26.093 10:55:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:39:26.093 Read completed with error (sct=0, sc=8) 00:39:26.093 starting I/O failed 00:39:26.093 Read completed with error (sct=0, sc=8) 00:39:26.093 starting I/O failed 00:39:26.093 Read completed with error (sct=0, sc=8) 00:39:26.093 starting I/O failed 00:39:26.093 Read completed with error (sct=0, sc=8) 00:39:26.093 starting I/O failed 00:39:26.093 Read completed with error (sct=0, sc=8) 00:39:26.093 starting I/O failed 00:39:26.093 Read completed with error (sct=0, sc=8) 00:39:26.093 starting I/O failed 00:39:26.093 Read completed with error (sct=0, sc=8) 00:39:26.093 starting I/O failed 00:39:26.093 Read completed with error (sct=0, sc=8) 00:39:26.093 starting I/O failed 00:39:26.093 Read completed with error (sct=0, sc=8) 00:39:26.093 starting I/O failed 00:39:26.093 Read completed with error (sct=0, sc=8) 00:39:26.093 starting I/O failed 00:39:26.093 Read completed with error (sct=0, sc=8) 00:39:26.093 starting I/O failed 00:39:26.093 Read completed with error (sct=0, sc=8) 00:39:26.093 starting I/O failed 00:39:26.093 Read completed with error (sct=0, sc=8) 00:39:26.094 starting I/O failed 00:39:26.094 Read completed with error (sct=0, sc=8) 00:39:26.094 starting I/O failed 00:39:26.094 Read completed with error (sct=0, sc=8) 00:39:26.094 starting I/O failed 00:39:26.094 Write completed with error (sct=0, sc=8) 00:39:26.094 starting I/O failed 00:39:26.094 Read completed with error (sct=0, sc=8) 00:39:26.094 starting I/O failed 00:39:26.094 Read completed with error (sct=0, sc=8) 00:39:26.094 starting I/O failed 00:39:26.094 Read completed with error (sct=0, sc=8) 00:39:26.094 starting I/O failed 00:39:26.094 Read completed with error (sct=0, sc=8) 00:39:26.094 starting I/O failed 00:39:26.094 Read completed with error (sct=0, sc=8) 00:39:26.094 starting I/O failed 00:39:26.094 Write completed with error (sct=0, sc=8) 00:39:26.094 starting I/O failed 00:39:26.094 Read completed with error (sct=0, sc=8) 00:39:26.094 starting I/O failed 00:39:26.094 Read completed with error (sct=0, sc=8) 00:39:26.094 starting I/O failed 00:39:26.094 Write completed with error (sct=0, sc=8) 00:39:26.094 starting I/O failed 00:39:26.094 Write completed with error (sct=0, sc=8) 00:39:26.094 starting I/O failed 00:39:26.094 Write completed with error (sct=0, sc=8) 00:39:26.094 starting I/O failed 00:39:26.094 Write completed with error (sct=0, sc=8) 00:39:26.094 starting I/O failed 00:39:26.094 Read completed with error (sct=0, sc=8) 00:39:26.094 starting I/O failed 00:39:26.094 Write completed with error (sct=0, sc=8) 00:39:26.094 starting I/O failed 00:39:26.094 Write completed with error (sct=0, sc=8) 00:39:26.094 starting I/O failed 00:39:26.094 Write completed with error (sct=0, sc=8) 00:39:26.094 starting I/O failed 00:39:26.094 [2024-07-22 10:55:31.707719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:26.094 [2024-07-22 10:55:31.708182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.094 [2024-07-22 10:55:31.708203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.094 qpair failed and we were unable to 
recover it. 00:39:26.094 [2024-07-22 10:55:31.708665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.094 [2024-07-22 10:55:31.708703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.094 qpair failed and we were unable to recover it. 00:39:26.094 [2024-07-22 10:55:31.709051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.094 [2024-07-22 10:55:31.709064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.094 qpair failed and we were unable to recover it. 00:39:26.094 [2024-07-22 10:55:31.709375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.094 [2024-07-22 10:55:31.709386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.094 qpair failed and we were unable to recover it. 00:39:26.094 [2024-07-22 10:55:31.709713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.094 [2024-07-22 10:55:31.709749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.094 qpair failed and we were unable to recover it. 00:39:26.094 [2024-07-22 10:55:31.710096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.094 [2024-07-22 10:55:31.710109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.094 qpair failed and we were unable to recover it. 00:39:26.094 [2024-07-22 10:55:31.710336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.094 [2024-07-22 10:55:31.710346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.094 qpair failed and we were unable to recover it. 00:39:26.094 [2024-07-22 10:55:31.710793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.094 [2024-07-22 10:55:31.710829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.094 qpair failed and we were unable to recover it. 00:39:26.094 [2024-07-22 10:55:31.711191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.094 [2024-07-22 10:55:31.711204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.094 qpair failed and we were unable to recover it. 00:39:26.094 [2024-07-22 10:55:31.711652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.094 [2024-07-22 10:55:31.711689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.094 qpair failed and we were unable to recover it. 00:39:26.094 [2024-07-22 10:55:31.711932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.094 [2024-07-22 10:55:31.711945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.094 qpair failed and we were unable to recover it. 
00:39:26.094 [2024-07-22 10:55:31.712262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.094 [2024-07-22 10:55:31.712272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.094 qpair failed and we were unable to recover it. 00:39:26.094 [2024-07-22 10:55:31.712646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.094 [2024-07-22 10:55:31.712657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.094 qpair failed and we were unable to recover it. 00:39:26.094 [2024-07-22 10:55:31.713004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.094 [2024-07-22 10:55:31.713014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.094 qpair failed and we were unable to recover it. 00:39:26.094 [2024-07-22 10:55:31.713401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.094 [2024-07-22 10:55:31.713412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.094 qpair failed and we were unable to recover it. 00:39:26.094 [2024-07-22 10:55:31.713595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.094 [2024-07-22 10:55:31.713605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.094 qpair failed and we were unable to recover it. 00:39:26.094 [2024-07-22 10:55:31.713787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.094 [2024-07-22 10:55:31.713800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.094 qpair failed and we were unable to recover it. 00:39:26.094 [2024-07-22 10:55:31.714135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.094 [2024-07-22 10:55:31.714144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.094 qpair failed and we were unable to recover it. 00:39:26.094 [2024-07-22 10:55:31.714423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.094 [2024-07-22 10:55:31.714433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.094 qpair failed and we were unable to recover it. 00:39:26.094 [2024-07-22 10:55:31.714627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.094 [2024-07-22 10:55:31.714637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.094 qpair failed and we were unable to recover it. 00:39:26.094 [2024-07-22 10:55:31.714845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.094 [2024-07-22 10:55:31.714854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.094 qpair failed and we were unable to recover it. 
00:39:26.094 [2024-07-22 10:55:31.715185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.094 [2024-07-22 10:55:31.715199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.094 qpair failed and we were unable to recover it. 00:39:26.094 [2024-07-22 10:55:31.715541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.094 [2024-07-22 10:55:31.715552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.094 qpair failed and we were unable to recover it. 00:39:26.094 [2024-07-22 10:55:31.715886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.094 [2024-07-22 10:55:31.715896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.094 qpair failed and we were unable to recover it. 00:39:26.094 [2024-07-22 10:55:31.716240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.094 [2024-07-22 10:55:31.716250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.094 qpair failed and we were unable to recover it. 00:39:26.094 [2024-07-22 10:55:31.716594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.094 [2024-07-22 10:55:31.716604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.094 qpair failed and we were unable to recover it. 00:39:26.094 [2024-07-22 10:55:31.716931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.094 [2024-07-22 10:55:31.716941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.094 qpair failed and we were unable to recover it. 00:39:26.094 [2024-07-22 10:55:31.717271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.094 [2024-07-22 10:55:31.717281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.094 qpair failed and we were unable to recover it. 00:39:26.094 [2024-07-22 10:55:31.717477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.094 [2024-07-22 10:55:31.717487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.094 qpair failed and we were unable to recover it. 00:39:26.094 [2024-07-22 10:55:31.717810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.094 [2024-07-22 10:55:31.717820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.094 qpair failed and we were unable to recover it. 00:39:26.094 [2024-07-22 10:55:31.718125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.094 [2024-07-22 10:55:31.718135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.094 qpair failed and we were unable to recover it. 
00:39:26.094 [2024-07-22 10:55:31.718428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.094 [2024-07-22 10:55:31.718438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.094 qpair failed and we were unable to recover it. 00:39:26.094 [2024-07-22 10:55:31.718773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.094 [2024-07-22 10:55:31.718783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.094 qpair failed and we were unable to recover it. 00:39:26.094 [2024-07-22 10:55:31.719101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.094 [2024-07-22 10:55:31.719111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.094 qpair failed and we were unable to recover it. 00:39:26.095 [2024-07-22 10:55:31.719427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.095 [2024-07-22 10:55:31.719437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.095 qpair failed and we were unable to recover it. 00:39:26.095 [2024-07-22 10:55:31.719719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.095 [2024-07-22 10:55:31.719729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.095 qpair failed and we were unable to recover it. 00:39:26.095 [2024-07-22 10:55:31.720078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.095 [2024-07-22 10:55:31.720089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.095 qpair failed and we were unable to recover it. 00:39:26.095 [2024-07-22 10:55:31.720999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.095 [2024-07-22 10:55:31.721011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.095 qpair failed and we were unable to recover it. 00:39:26.095 [2024-07-22 10:55:31.721330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.095 [2024-07-22 10:55:31.721340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.095 qpair failed and we were unable to recover it. 00:39:26.095 [2024-07-22 10:55:31.721707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.095 [2024-07-22 10:55:31.721716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.095 qpair failed and we were unable to recover it. 00:39:26.095 [2024-07-22 10:55:31.722024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.095 [2024-07-22 10:55:31.722034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.095 qpair failed and we were unable to recover it. 
00:39:26.100 [2024-07-22 10:55:31.787565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.100 [2024-07-22 10:55:31.787575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.100 qpair failed and we were unable to recover it. 00:39:26.100 [2024-07-22 10:55:31.787904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.100 [2024-07-22 10:55:31.787914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.100 qpair failed and we were unable to recover it. 00:39:26.100 [2024-07-22 10:55:31.788263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.100 [2024-07-22 10:55:31.788281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.100 qpair failed and we were unable to recover it. 00:39:26.100 [2024-07-22 10:55:31.788599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.100 [2024-07-22 10:55:31.788609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.100 qpair failed and we were unable to recover it. 00:39:26.373 [2024-07-22 10:55:31.788840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.373 [2024-07-22 10:55:31.788852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.373 qpair failed and we were unable to recover it. 00:39:26.373 [2024-07-22 10:55:31.789202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.373 [2024-07-22 10:55:31.789213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.373 qpair failed and we were unable to recover it. 00:39:26.373 [2024-07-22 10:55:31.789538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.373 [2024-07-22 10:55:31.789548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.373 qpair failed and we were unable to recover it. 00:39:26.373 [2024-07-22 10:55:31.789883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.373 [2024-07-22 10:55:31.789893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.373 qpair failed and we were unable to recover it. 00:39:26.373 [2024-07-22 10:55:31.790215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.373 [2024-07-22 10:55:31.790224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.373 qpair failed and we were unable to recover it. 00:39:26.373 [2024-07-22 10:55:31.790566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.373 [2024-07-22 10:55:31.790576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.373 qpair failed and we were unable to recover it. 
00:39:26.373 [2024-07-22 10:55:31.790926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.373 [2024-07-22 10:55:31.790936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.373 qpair failed and we were unable to recover it. 00:39:26.373 [2024-07-22 10:55:31.791264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.373 [2024-07-22 10:55:31.791275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.373 qpair failed and we were unable to recover it. 00:39:26.373 [2024-07-22 10:55:31.791629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.373 [2024-07-22 10:55:31.791639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.373 qpair failed and we were unable to recover it. 00:39:26.373 [2024-07-22 10:55:31.791960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.373 [2024-07-22 10:55:31.791970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.373 qpair failed and we were unable to recover it. 00:39:26.373 [2024-07-22 10:55:31.792194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.373 [2024-07-22 10:55:31.792204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.373 qpair failed and we were unable to recover it. 00:39:26.373 [2024-07-22 10:55:31.792517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.373 [2024-07-22 10:55:31.792527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.373 qpair failed and we were unable to recover it. 00:39:26.373 [2024-07-22 10:55:31.792718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.373 [2024-07-22 10:55:31.792728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.373 qpair failed and we were unable to recover it. 00:39:26.373 [2024-07-22 10:55:31.793050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.373 [2024-07-22 10:55:31.793060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.373 qpair failed and we were unable to recover it. 00:39:26.373 [2024-07-22 10:55:31.793408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.373 [2024-07-22 10:55:31.793418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.373 qpair failed and we were unable to recover it. 00:39:26.373 [2024-07-22 10:55:31.793643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.373 [2024-07-22 10:55:31.793653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.373 qpair failed and we were unable to recover it. 
00:39:26.373 [2024-07-22 10:55:31.793843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.373 [2024-07-22 10:55:31.793853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.373 qpair failed and we were unable to recover it. 00:39:26.373 [2024-07-22 10:55:31.794148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.373 [2024-07-22 10:55:31.794157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.373 qpair failed and we were unable to recover it. 00:39:26.373 [2024-07-22 10:55:31.794361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.373 [2024-07-22 10:55:31.794370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.373 qpair failed and we were unable to recover it. 00:39:26.373 [2024-07-22 10:55:31.794680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.373 [2024-07-22 10:55:31.794690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.373 qpair failed and we were unable to recover it. 00:39:26.373 [2024-07-22 10:55:31.795028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.373 [2024-07-22 10:55:31.795039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.373 qpair failed and we were unable to recover it. 00:39:26.373 [2024-07-22 10:55:31.795378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.373 [2024-07-22 10:55:31.795388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.373 qpair failed and we were unable to recover it. 00:39:26.373 [2024-07-22 10:55:31.795699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.373 [2024-07-22 10:55:31.795710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.373 qpair failed and we were unable to recover it. 00:39:26.373 [2024-07-22 10:55:31.796091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.373 [2024-07-22 10:55:31.796101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.373 qpair failed and we were unable to recover it. 00:39:26.373 [2024-07-22 10:55:31.796405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.373 [2024-07-22 10:55:31.796422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.373 qpair failed and we were unable to recover it. 00:39:26.373 [2024-07-22 10:55:31.796752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.373 [2024-07-22 10:55:31.796761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.373 qpair failed and we were unable to recover it. 
00:39:26.373 [2024-07-22 10:55:31.797103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.373 [2024-07-22 10:55:31.797113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.373 qpair failed and we were unable to recover it. 00:39:26.373 [2024-07-22 10:55:31.797454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.373 [2024-07-22 10:55:31.797465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.373 qpair failed and we were unable to recover it. 00:39:26.373 [2024-07-22 10:55:31.797783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.373 [2024-07-22 10:55:31.797794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.373 qpair failed and we were unable to recover it. 00:39:26.374 [2024-07-22 10:55:31.798135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.374 [2024-07-22 10:55:31.798144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.374 qpair failed and we were unable to recover it. 00:39:26.374 [2024-07-22 10:55:31.798452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.374 [2024-07-22 10:55:31.798462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.374 qpair failed and we were unable to recover it. 00:39:26.374 [2024-07-22 10:55:31.798795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.374 [2024-07-22 10:55:31.798805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.374 qpair failed and we were unable to recover it. 00:39:26.374 [2024-07-22 10:55:31.799119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.374 [2024-07-22 10:55:31.799129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.374 qpair failed and we were unable to recover it. 00:39:26.374 [2024-07-22 10:55:31.799475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.374 [2024-07-22 10:55:31.799484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.374 qpair failed and we were unable to recover it. 00:39:26.374 [2024-07-22 10:55:31.799843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.374 [2024-07-22 10:55:31.799853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.374 qpair failed and we were unable to recover it. 00:39:26.374 [2024-07-22 10:55:31.800161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.374 [2024-07-22 10:55:31.800171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.374 qpair failed and we were unable to recover it. 
00:39:26.374 [2024-07-22 10:55:31.800526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.374 [2024-07-22 10:55:31.800536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.374 qpair failed and we were unable to recover it. 00:39:26.374 [2024-07-22 10:55:31.800880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.374 [2024-07-22 10:55:31.800889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.374 qpair failed and we were unable to recover it. 00:39:26.374 [2024-07-22 10:55:31.801231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.374 [2024-07-22 10:55:31.801240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.374 qpair failed and we were unable to recover it. 00:39:26.374 [2024-07-22 10:55:31.801582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.374 [2024-07-22 10:55:31.801592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.374 qpair failed and we were unable to recover it. 00:39:26.374 [2024-07-22 10:55:31.801910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.374 [2024-07-22 10:55:31.801920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.374 qpair failed and we were unable to recover it. 00:39:26.374 [2024-07-22 10:55:31.802251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.374 [2024-07-22 10:55:31.802260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.374 qpair failed and we were unable to recover it. 00:39:26.374 [2024-07-22 10:55:31.802582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.374 [2024-07-22 10:55:31.802591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.374 qpair failed and we were unable to recover it. 00:39:26.374 [2024-07-22 10:55:31.802919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.374 [2024-07-22 10:55:31.802928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.374 qpair failed and we were unable to recover it. 00:39:26.374 [2024-07-22 10:55:31.803229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.374 [2024-07-22 10:55:31.803239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.374 qpair failed and we were unable to recover it. 00:39:26.374 [2024-07-22 10:55:31.803583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.374 [2024-07-22 10:55:31.803593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.374 qpair failed and we were unable to recover it. 
00:39:26.374 [2024-07-22 10:55:31.803879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.374 [2024-07-22 10:55:31.803889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.374 qpair failed and we were unable to recover it. 00:39:26.374 [2024-07-22 10:55:31.804092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.374 [2024-07-22 10:55:31.804102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.374 qpair failed and we were unable to recover it. 00:39:26.374 [2024-07-22 10:55:31.804341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.374 [2024-07-22 10:55:31.804351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.374 qpair failed and we were unable to recover it. 00:39:26.374 [2024-07-22 10:55:31.804525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.374 [2024-07-22 10:55:31.804535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.374 qpair failed and we were unable to recover it. 00:39:26.374 [2024-07-22 10:55:31.804843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.374 [2024-07-22 10:55:31.804853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.374 qpair failed and we were unable to recover it. 00:39:26.374 [2024-07-22 10:55:31.805208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.374 [2024-07-22 10:55:31.805218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.374 qpair failed and we were unable to recover it. 00:39:26.374 [2024-07-22 10:55:31.805531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.374 [2024-07-22 10:55:31.805541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.374 qpair failed and we were unable to recover it. 00:39:26.374 [2024-07-22 10:55:31.805853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.374 [2024-07-22 10:55:31.805863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.374 qpair failed and we were unable to recover it. 00:39:26.374 [2024-07-22 10:55:31.806210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.374 [2024-07-22 10:55:31.806220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.374 qpair failed and we were unable to recover it. 00:39:26.374 [2024-07-22 10:55:31.806539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.374 [2024-07-22 10:55:31.806550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.374 qpair failed and we were unable to recover it. 
00:39:26.374 [2024-07-22 10:55:31.806901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.374 [2024-07-22 10:55:31.806910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.374 qpair failed and we were unable to recover it. 00:39:26.374 [2024-07-22 10:55:31.807140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.374 [2024-07-22 10:55:31.807149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.374 qpair failed and we were unable to recover it. 00:39:26.374 [2024-07-22 10:55:31.807478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.374 [2024-07-22 10:55:31.807488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.374 qpair failed and we were unable to recover it. 00:39:26.374 [2024-07-22 10:55:31.807832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.374 [2024-07-22 10:55:31.807842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.374 qpair failed and we were unable to recover it. 00:39:26.374 [2024-07-22 10:55:31.808183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.374 [2024-07-22 10:55:31.808192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.374 qpair failed and we were unable to recover it. 00:39:26.374 [2024-07-22 10:55:31.808501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.374 [2024-07-22 10:55:31.808513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.374 qpair failed and we were unable to recover it. 00:39:26.374 [2024-07-22 10:55:31.808814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.374 [2024-07-22 10:55:31.808823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.374 qpair failed and we were unable to recover it. 00:39:26.374 [2024-07-22 10:55:31.809136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.374 [2024-07-22 10:55:31.809145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.374 qpair failed and we were unable to recover it. 00:39:26.374 [2024-07-22 10:55:31.809516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.374 [2024-07-22 10:55:31.809526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.374 qpair failed and we were unable to recover it. 00:39:26.374 [2024-07-22 10:55:31.809863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.374 [2024-07-22 10:55:31.809872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.374 qpair failed and we were unable to recover it. 
00:39:26.374 [2024-07-22 10:55:31.810200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.374 [2024-07-22 10:55:31.810210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.374 qpair failed and we were unable to recover it. 00:39:26.374 [2024-07-22 10:55:31.810525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.374 [2024-07-22 10:55:31.810535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.374 qpair failed and we were unable to recover it. 00:39:26.374 [2024-07-22 10:55:31.810852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.374 [2024-07-22 10:55:31.810861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.374 qpair failed and we were unable to recover it. 00:39:26.374 [2024-07-22 10:55:31.811154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.374 [2024-07-22 10:55:31.811163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.375 qpair failed and we were unable to recover it. 00:39:26.375 [2024-07-22 10:55:31.811486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.375 [2024-07-22 10:55:31.811496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.375 qpair failed and we were unable to recover it. 00:39:26.375 [2024-07-22 10:55:31.811833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.375 [2024-07-22 10:55:31.811843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.375 qpair failed and we were unable to recover it. 00:39:26.375 [2024-07-22 10:55:31.812042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.375 [2024-07-22 10:55:31.812052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.375 qpair failed and we were unable to recover it. 00:39:26.375 [2024-07-22 10:55:31.812315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.375 [2024-07-22 10:55:31.812325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.375 qpair failed and we were unable to recover it. 00:39:26.375 [2024-07-22 10:55:31.812672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.375 [2024-07-22 10:55:31.812682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.375 qpair failed and we were unable to recover it. 00:39:26.375 [2024-07-22 10:55:31.812998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.375 [2024-07-22 10:55:31.813008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.375 qpair failed and we were unable to recover it. 
00:39:26.375 [2024-07-22 10:55:31.813335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.375 [2024-07-22 10:55:31.813345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.375 qpair failed and we were unable to recover it. 00:39:26.375 [2024-07-22 10:55:31.813668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.375 [2024-07-22 10:55:31.813679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.375 qpair failed and we were unable to recover it. 00:39:26.375 [2024-07-22 10:55:31.814011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.375 [2024-07-22 10:55:31.814021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.375 qpair failed and we were unable to recover it. 00:39:26.375 [2024-07-22 10:55:31.814341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.375 [2024-07-22 10:55:31.814351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.375 qpair failed and we were unable to recover it. 00:39:26.375 [2024-07-22 10:55:31.814676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.375 [2024-07-22 10:55:31.814686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.375 qpair failed and we were unable to recover it. 00:39:26.375 [2024-07-22 10:55:31.815025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.375 [2024-07-22 10:55:31.815035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.375 qpair failed and we were unable to recover it. 00:39:26.375 [2024-07-22 10:55:31.815241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.375 [2024-07-22 10:55:31.815250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.375 qpair failed and we were unable to recover it. 00:39:26.375 [2024-07-22 10:55:31.815501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.375 [2024-07-22 10:55:31.815510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.375 qpair failed and we were unable to recover it. 00:39:26.375 [2024-07-22 10:55:31.815835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.375 [2024-07-22 10:55:31.815844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.375 qpair failed and we were unable to recover it. 00:39:26.375 [2024-07-22 10:55:31.816048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.375 [2024-07-22 10:55:31.816058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.375 qpair failed and we were unable to recover it. 
00:39:26.375 [2024-07-22 10:55:31.816362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.375 [2024-07-22 10:55:31.816371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.375 qpair failed and we were unable to recover it. 00:39:26.375 [2024-07-22 10:55:31.816701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.375 [2024-07-22 10:55:31.816711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.375 qpair failed and we were unable to recover it. 00:39:26.375 [2024-07-22 10:55:31.817126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.375 [2024-07-22 10:55:31.817137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.375 qpair failed and we were unable to recover it. 00:39:26.375 [2024-07-22 10:55:31.817340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.375 [2024-07-22 10:55:31.817350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.375 qpair failed and we were unable to recover it. 00:39:26.375 [2024-07-22 10:55:31.817624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.375 [2024-07-22 10:55:31.817634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.375 qpair failed and we were unable to recover it. 00:39:26.375 [2024-07-22 10:55:31.817973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.375 [2024-07-22 10:55:31.817982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.375 qpair failed and we were unable to recover it. 00:39:26.375 [2024-07-22 10:55:31.818290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.375 [2024-07-22 10:55:31.818299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.375 qpair failed and we were unable to recover it. 00:39:26.375 [2024-07-22 10:55:31.818623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.375 [2024-07-22 10:55:31.818632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.375 qpair failed and we were unable to recover it. 00:39:26.375 [2024-07-22 10:55:31.818946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.375 [2024-07-22 10:55:31.818956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.375 qpair failed and we were unable to recover it. 00:39:26.375 [2024-07-22 10:55:31.819275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.375 [2024-07-22 10:55:31.819284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.375 qpair failed and we were unable to recover it. 
00:39:26.375 [2024-07-22 10:55:31.819679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.375 [2024-07-22 10:55:31.819735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.375 qpair failed and we were unable to recover it. 00:39:26.375 [2024-07-22 10:55:31.819960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.375 [2024-07-22 10:55:31.819970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.375 qpair failed and we were unable to recover it. 00:39:26.375 [2024-07-22 10:55:31.820331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.375 [2024-07-22 10:55:31.820340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.375 qpair failed and we were unable to recover it. 00:39:26.375 [2024-07-22 10:55:31.820706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.375 [2024-07-22 10:55:31.820715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.375 qpair failed and we were unable to recover it. 00:39:26.375 [2024-07-22 10:55:31.821049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.375 [2024-07-22 10:55:31.821058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.375 qpair failed and we were unable to recover it. 00:39:26.375 [2024-07-22 10:55:31.821374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.375 [2024-07-22 10:55:31.821383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.375 qpair failed and we were unable to recover it. 00:39:26.375 [2024-07-22 10:55:31.821757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.375 [2024-07-22 10:55:31.821768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.375 qpair failed and we were unable to recover it. 00:39:26.375 [2024-07-22 10:55:31.822071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.375 [2024-07-22 10:55:31.822080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.375 qpair failed and we were unable to recover it. 00:39:26.375 [2024-07-22 10:55:31.822433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.375 [2024-07-22 10:55:31.822442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.375 qpair failed and we were unable to recover it. 00:39:26.375 [2024-07-22 10:55:31.822790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.375 [2024-07-22 10:55:31.822799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.375 qpair failed and we were unable to recover it. 
00:39:26.375 [2024-07-22 10:55:31.823111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.375 [2024-07-22 10:55:31.823121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.375 qpair failed and we were unable to recover it. 00:39:26.375 [2024-07-22 10:55:31.823485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.375 [2024-07-22 10:55:31.823495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.375 qpair failed and we were unable to recover it. 00:39:26.375 [2024-07-22 10:55:31.823688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.375 [2024-07-22 10:55:31.823698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.375 qpair failed and we were unable to recover it. 00:39:26.375 [2024-07-22 10:55:31.824059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.376 [2024-07-22 10:55:31.824069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.376 qpair failed and we were unable to recover it. 00:39:26.376 [2024-07-22 10:55:31.824287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.376 [2024-07-22 10:55:31.824297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.376 qpair failed and we were unable to recover it. 00:39:26.376 [2024-07-22 10:55:31.824628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.376 [2024-07-22 10:55:31.824639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.376 qpair failed and we were unable to recover it. 00:39:26.376 [2024-07-22 10:55:31.824965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.376 [2024-07-22 10:55:31.824975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.376 qpair failed and we were unable to recover it. 00:39:26.376 [2024-07-22 10:55:31.825346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.376 [2024-07-22 10:55:31.825357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.376 qpair failed and we were unable to recover it. 00:39:26.376 [2024-07-22 10:55:31.825648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.376 [2024-07-22 10:55:31.825657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.376 qpair failed and we were unable to recover it. 00:39:26.376 [2024-07-22 10:55:31.825960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.376 [2024-07-22 10:55:31.825969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.376 qpair failed and we were unable to recover it. 
00:39:26.376 [2024-07-22 10:55:31.826327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.376 [2024-07-22 10:55:31.826336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.376 qpair failed and we were unable to recover it. 00:39:26.376 [2024-07-22 10:55:31.826702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.376 [2024-07-22 10:55:31.826712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.376 qpair failed and we were unable to recover it. 00:39:26.376 [2024-07-22 10:55:31.826827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.376 [2024-07-22 10:55:31.826836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.376 qpair failed and we were unable to recover it. 00:39:26.376 [2024-07-22 10:55:31.827162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.376 [2024-07-22 10:55:31.827172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.376 qpair failed and we were unable to recover it. 00:39:26.376 [2024-07-22 10:55:31.827515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.376 [2024-07-22 10:55:31.827525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.376 qpair failed and we were unable to recover it. 00:39:26.376 [2024-07-22 10:55:31.827859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.376 [2024-07-22 10:55:31.827868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.376 qpair failed and we were unable to recover it. 00:39:26.376 [2024-07-22 10:55:31.828031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.376 [2024-07-22 10:55:31.828041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.376 qpair failed and we were unable to recover it. 00:39:26.376 [2024-07-22 10:55:31.828280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.376 [2024-07-22 10:55:31.828289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.376 qpair failed and we were unable to recover it. 00:39:26.376 [2024-07-22 10:55:31.828622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.376 [2024-07-22 10:55:31.828631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.376 qpair failed and we were unable to recover it. 00:39:26.376 [2024-07-22 10:55:31.828820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.376 [2024-07-22 10:55:31.828829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.376 qpair failed and we were unable to recover it. 
00:39:26.376 [2024-07-22 10:55:31.829127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.376 [2024-07-22 10:55:31.829136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:26.376 qpair failed and we were unable to recover it.
00:39:26.376 [... the same three-line failure sequence repeats for every reconnect attempt from 10:55:31.829 through 10:55:31.897: connect() to 10.0.0.2 port 4420 fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x1c098a0, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:39:26.382 [2024-07-22 10:55:31.897167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.382 [2024-07-22 10:55:31.897177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:26.382 qpair failed and we were unable to recover it.
00:39:26.382 [2024-07-22 10:55:31.897520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.382 [2024-07-22 10:55:31.897530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.382 qpair failed and we were unable to recover it. 00:39:26.382 [2024-07-22 10:55:31.897872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.382 [2024-07-22 10:55:31.897882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.382 qpair failed and we were unable to recover it. 00:39:26.382 [2024-07-22 10:55:31.898227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.382 [2024-07-22 10:55:31.898238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.382 qpair failed and we were unable to recover it. 00:39:26.382 [2024-07-22 10:55:31.898545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.382 [2024-07-22 10:55:31.898555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.382 qpair failed and we were unable to recover it. 00:39:26.382 [2024-07-22 10:55:31.898869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.382 [2024-07-22 10:55:31.898879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.382 qpair failed and we were unable to recover it. 00:39:26.382 [2024-07-22 10:55:31.899213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.382 [2024-07-22 10:55:31.899222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.382 qpair failed and we were unable to recover it. 00:39:26.382 [2024-07-22 10:55:31.899550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.382 [2024-07-22 10:55:31.899560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.382 qpair failed and we were unable to recover it. 00:39:26.382 [2024-07-22 10:55:31.899748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.382 [2024-07-22 10:55:31.899758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.382 qpair failed and we were unable to recover it. 00:39:26.382 [2024-07-22 10:55:31.900059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.382 [2024-07-22 10:55:31.900068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.382 qpair failed and we were unable to recover it. 00:39:26.382 [2024-07-22 10:55:31.900365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.382 [2024-07-22 10:55:31.900374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.382 qpair failed and we were unable to recover it. 
00:39:26.382 [2024-07-22 10:55:31.900602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.382 [2024-07-22 10:55:31.900611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.382 qpair failed and we were unable to recover it. 00:39:26.382 [2024-07-22 10:55:31.900966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.382 [2024-07-22 10:55:31.900976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.382 qpair failed and we were unable to recover it. 00:39:26.382 [2024-07-22 10:55:31.901270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.382 [2024-07-22 10:55:31.901280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.382 qpair failed and we were unable to recover it. 00:39:26.382 [2024-07-22 10:55:31.901592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.382 [2024-07-22 10:55:31.901602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.382 qpair failed and we were unable to recover it. 00:39:26.382 [2024-07-22 10:55:31.901896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.382 [2024-07-22 10:55:31.901906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.382 qpair failed and we were unable to recover it. 00:39:26.382 [2024-07-22 10:55:31.902246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.382 [2024-07-22 10:55:31.902256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.382 qpair failed and we were unable to recover it. 00:39:26.382 [2024-07-22 10:55:31.902576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.382 [2024-07-22 10:55:31.902587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.382 qpair failed and we were unable to recover it. 00:39:26.382 [2024-07-22 10:55:31.902932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.382 [2024-07-22 10:55:31.902942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.382 qpair failed and we were unable to recover it. 00:39:26.382 [2024-07-22 10:55:31.903253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.382 [2024-07-22 10:55:31.903264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.382 qpair failed and we were unable to recover it. 00:39:26.382 [2024-07-22 10:55:31.903588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.383 [2024-07-22 10:55:31.903598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.383 qpair failed and we were unable to recover it. 
00:39:26.383 [2024-07-22 10:55:31.903901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.383 [2024-07-22 10:55:31.903911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.383 qpair failed and we were unable to recover it. 00:39:26.383 [2024-07-22 10:55:31.904251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.383 [2024-07-22 10:55:31.904261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.383 qpair failed and we were unable to recover it. 00:39:26.383 [2024-07-22 10:55:31.904559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.383 [2024-07-22 10:55:31.904569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.383 qpair failed and we were unable to recover it. 00:39:26.383 [2024-07-22 10:55:31.904909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.383 [2024-07-22 10:55:31.904918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.383 qpair failed and we were unable to recover it. 00:39:26.383 [2024-07-22 10:55:31.905112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.383 [2024-07-22 10:55:31.905123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.383 qpair failed and we were unable to recover it. 00:39:26.383 [2024-07-22 10:55:31.905424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.383 [2024-07-22 10:55:31.905434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.383 qpair failed and we were unable to recover it. 00:39:26.383 [2024-07-22 10:55:31.905768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.383 [2024-07-22 10:55:31.905777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.383 qpair failed and we were unable to recover it. 00:39:26.383 [2024-07-22 10:55:31.906113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.383 [2024-07-22 10:55:31.906123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.383 qpair failed and we were unable to recover it. 00:39:26.383 [2024-07-22 10:55:31.906448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.383 [2024-07-22 10:55:31.906458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.383 qpair failed and we were unable to recover it. 00:39:26.383 [2024-07-22 10:55:31.906777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.383 [2024-07-22 10:55:31.906786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.383 qpair failed and we were unable to recover it. 
00:39:26.383 [2024-07-22 10:55:31.907099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.383 [2024-07-22 10:55:31.907109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.383 qpair failed and we were unable to recover it. 00:39:26.383 [2024-07-22 10:55:31.907440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.383 [2024-07-22 10:55:31.907449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.383 qpair failed and we were unable to recover it. 00:39:26.383 [2024-07-22 10:55:31.907677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.383 [2024-07-22 10:55:31.907686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.383 qpair failed and we were unable to recover it. 00:39:26.383 [2024-07-22 10:55:31.908012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.383 [2024-07-22 10:55:31.908021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.383 qpair failed and we were unable to recover it. 00:39:26.383 [2024-07-22 10:55:31.908368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.383 [2024-07-22 10:55:31.908377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.383 qpair failed and we were unable to recover it. 00:39:26.383 [2024-07-22 10:55:31.908688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.383 [2024-07-22 10:55:31.908698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.383 qpair failed and we were unable to recover it. 00:39:26.383 [2024-07-22 10:55:31.909032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.383 [2024-07-22 10:55:31.909041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.383 qpair failed and we were unable to recover it. 00:39:26.383 [2024-07-22 10:55:31.909180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.383 [2024-07-22 10:55:31.909192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.383 qpair failed and we were unable to recover it. 00:39:26.383 [2024-07-22 10:55:31.909497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.383 [2024-07-22 10:55:31.909507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.383 qpair failed and we were unable to recover it. 00:39:26.383 [2024-07-22 10:55:31.909837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.383 [2024-07-22 10:55:31.909847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.383 qpair failed and we were unable to recover it. 
00:39:26.383 [2024-07-22 10:55:31.910250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.383 [2024-07-22 10:55:31.910259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.383 qpair failed and we were unable to recover it. 00:39:26.383 [2024-07-22 10:55:31.910570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.383 [2024-07-22 10:55:31.910579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.383 qpair failed and we were unable to recover it. 00:39:26.383 [2024-07-22 10:55:31.910920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.383 [2024-07-22 10:55:31.910929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.383 qpair failed and we were unable to recover it. 00:39:26.383 [2024-07-22 10:55:31.911225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.383 [2024-07-22 10:55:31.911236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.383 qpair failed and we were unable to recover it. 00:39:26.383 [2024-07-22 10:55:31.911586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.383 [2024-07-22 10:55:31.911596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.383 qpair failed and we were unable to recover it. 00:39:26.383 [2024-07-22 10:55:31.911939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.383 [2024-07-22 10:55:31.911949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.383 qpair failed and we were unable to recover it. 00:39:26.383 [2024-07-22 10:55:31.912264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.383 [2024-07-22 10:55:31.912274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.383 qpair failed and we were unable to recover it. 00:39:26.383 [2024-07-22 10:55:31.912491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.383 [2024-07-22 10:55:31.912506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.383 qpair failed and we were unable to recover it. 00:39:26.383 [2024-07-22 10:55:31.912840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.383 [2024-07-22 10:55:31.912849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.383 qpair failed and we were unable to recover it. 00:39:26.383 [2024-07-22 10:55:31.913203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.383 [2024-07-22 10:55:31.913212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.383 qpair failed and we were unable to recover it. 
00:39:26.383 [2024-07-22 10:55:31.913406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.383 [2024-07-22 10:55:31.913417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.383 qpair failed and we were unable to recover it. 00:39:26.383 [2024-07-22 10:55:31.913713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.383 [2024-07-22 10:55:31.913723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.383 qpair failed and we were unable to recover it. 00:39:26.383 [2024-07-22 10:55:31.914042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.383 [2024-07-22 10:55:31.914052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.383 qpair failed and we were unable to recover it. 00:39:26.383 [2024-07-22 10:55:31.914389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.383 [2024-07-22 10:55:31.914406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.383 qpair failed and we were unable to recover it. 00:39:26.383 [2024-07-22 10:55:31.914741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.383 [2024-07-22 10:55:31.914750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.383 qpair failed and we were unable to recover it. 00:39:26.383 [2024-07-22 10:55:31.915065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.383 [2024-07-22 10:55:31.915075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.383 qpair failed and we were unable to recover it. 00:39:26.383 [2024-07-22 10:55:31.915426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.383 [2024-07-22 10:55:31.915435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.383 qpair failed and we were unable to recover it. 00:39:26.383 [2024-07-22 10:55:31.915799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.383 [2024-07-22 10:55:31.915809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.383 qpair failed and we were unable to recover it. 00:39:26.383 [2024-07-22 10:55:31.916127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.383 [2024-07-22 10:55:31.916143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.383 qpair failed and we were unable to recover it. 00:39:26.383 [2024-07-22 10:55:31.916493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.383 [2024-07-22 10:55:31.916504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.383 qpair failed and we were unable to recover it. 
00:39:26.384 [2024-07-22 10:55:31.916850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.384 [2024-07-22 10:55:31.916859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.384 qpair failed and we were unable to recover it. 00:39:26.384 [2024-07-22 10:55:31.917161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.384 [2024-07-22 10:55:31.917171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.384 qpair failed and we were unable to recover it. 00:39:26.384 [2024-07-22 10:55:31.917510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.384 [2024-07-22 10:55:31.917520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.384 qpair failed and we were unable to recover it. 00:39:26.384 [2024-07-22 10:55:31.917866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.384 [2024-07-22 10:55:31.917875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.384 qpair failed and we were unable to recover it. 00:39:26.384 [2024-07-22 10:55:31.918197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.384 [2024-07-22 10:55:31.918217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.384 qpair failed and we were unable to recover it. 00:39:26.384 [2024-07-22 10:55:31.918529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.384 [2024-07-22 10:55:31.918539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.384 qpair failed and we were unable to recover it. 00:39:26.384 [2024-07-22 10:55:31.918884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.384 [2024-07-22 10:55:31.918893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.384 qpair failed and we were unable to recover it. 00:39:26.384 [2024-07-22 10:55:31.919209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.384 [2024-07-22 10:55:31.919218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.384 qpair failed and we were unable to recover it. 00:39:26.384 [2024-07-22 10:55:31.919514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.384 [2024-07-22 10:55:31.919524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.384 qpair failed and we were unable to recover it. 00:39:26.384 [2024-07-22 10:55:31.919866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.384 [2024-07-22 10:55:31.919875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.384 qpair failed and we were unable to recover it. 
00:39:26.384 [2024-07-22 10:55:31.920179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.384 [2024-07-22 10:55:31.920189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.384 qpair failed and we were unable to recover it. 00:39:26.384 [2024-07-22 10:55:31.920470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.384 [2024-07-22 10:55:31.920479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.384 qpair failed and we were unable to recover it. 00:39:26.384 [2024-07-22 10:55:31.920764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.384 [2024-07-22 10:55:31.920774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.384 qpair failed and we were unable to recover it. 00:39:26.384 [2024-07-22 10:55:31.921136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.384 [2024-07-22 10:55:31.921145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.384 qpair failed and we were unable to recover it. 00:39:26.384 [2024-07-22 10:55:31.921394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.384 [2024-07-22 10:55:31.921409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.384 qpair failed and we were unable to recover it. 00:39:26.384 [2024-07-22 10:55:31.921690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.384 [2024-07-22 10:55:31.921699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.384 qpair failed and we were unable to recover it. 00:39:26.384 [2024-07-22 10:55:31.922028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.384 [2024-07-22 10:55:31.922037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.384 qpair failed and we were unable to recover it. 00:39:26.384 [2024-07-22 10:55:31.922271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.384 [2024-07-22 10:55:31.922280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.384 qpair failed and we were unable to recover it. 00:39:26.384 [2024-07-22 10:55:31.922575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.384 [2024-07-22 10:55:31.922585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.384 qpair failed and we were unable to recover it. 00:39:26.384 [2024-07-22 10:55:31.922924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.384 [2024-07-22 10:55:31.922933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.384 qpair failed and we were unable to recover it. 
00:39:26.384 [2024-07-22 10:55:31.923272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.384 [2024-07-22 10:55:31.923281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.384 qpair failed and we were unable to recover it. 00:39:26.384 [2024-07-22 10:55:31.923578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.384 [2024-07-22 10:55:31.923587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.384 qpair failed and we were unable to recover it. 00:39:26.384 [2024-07-22 10:55:31.923904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.384 [2024-07-22 10:55:31.923913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.384 qpair failed and we were unable to recover it. 00:39:26.384 [2024-07-22 10:55:31.924254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.384 [2024-07-22 10:55:31.924264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.384 qpair failed and we were unable to recover it. 00:39:26.384 [2024-07-22 10:55:31.924610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.384 [2024-07-22 10:55:31.924621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.384 qpair failed and we were unable to recover it. 00:39:26.384 [2024-07-22 10:55:31.925024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.384 [2024-07-22 10:55:31.925034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.384 qpair failed and we were unable to recover it. 00:39:26.384 [2024-07-22 10:55:31.925354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.384 [2024-07-22 10:55:31.925363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.384 qpair failed and we were unable to recover it. 00:39:26.384 [2024-07-22 10:55:31.925725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.384 [2024-07-22 10:55:31.925735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.384 qpair failed and we were unable to recover it. 00:39:26.384 [2024-07-22 10:55:31.926049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.384 [2024-07-22 10:55:31.926059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.384 qpair failed and we were unable to recover it. 00:39:26.384 [2024-07-22 10:55:31.926358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.384 [2024-07-22 10:55:31.926368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.384 qpair failed and we were unable to recover it. 
00:39:26.384 [2024-07-22 10:55:31.926681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.384 [2024-07-22 10:55:31.926692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.384 qpair failed and we were unable to recover it. 00:39:26.384 [2024-07-22 10:55:31.927045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.384 [2024-07-22 10:55:31.927059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.384 qpair failed and we were unable to recover it. 00:39:26.384 [2024-07-22 10:55:31.927291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.384 [2024-07-22 10:55:31.927300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.384 qpair failed and we were unable to recover it. 00:39:26.384 [2024-07-22 10:55:31.927617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.384 [2024-07-22 10:55:31.927628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.384 qpair failed and we were unable to recover it. 00:39:26.384 [2024-07-22 10:55:31.927951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.384 [2024-07-22 10:55:31.927961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.384 qpair failed and we were unable to recover it. 00:39:26.384 [2024-07-22 10:55:31.928320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.384 [2024-07-22 10:55:31.928330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.384 qpair failed and we were unable to recover it. 00:39:26.384 [2024-07-22 10:55:31.928670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.384 [2024-07-22 10:55:31.928681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.384 qpair failed and we were unable to recover it. 00:39:26.384 [2024-07-22 10:55:31.929037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.384 [2024-07-22 10:55:31.929047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.384 qpair failed and we were unable to recover it. 00:39:26.384 [2024-07-22 10:55:31.929393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.384 [2024-07-22 10:55:31.929417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.384 qpair failed and we were unable to recover it. 00:39:26.384 [2024-07-22 10:55:31.929609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.384 [2024-07-22 10:55:31.929620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.384 qpair failed and we were unable to recover it. 
00:39:26.384 [2024-07-22 10:55:31.929951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.385 [2024-07-22 10:55:31.929960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.385 qpair failed and we were unable to recover it. 00:39:26.385 [2024-07-22 10:55:31.930307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.385 [2024-07-22 10:55:31.930317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.385 qpair failed and we were unable to recover it. 00:39:26.385 [2024-07-22 10:55:31.930646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.385 [2024-07-22 10:55:31.930656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.385 qpair failed and we were unable to recover it. 00:39:26.385 [2024-07-22 10:55:31.930952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.385 [2024-07-22 10:55:31.930962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.385 qpair failed and we were unable to recover it. 00:39:26.385 [2024-07-22 10:55:31.931278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.385 [2024-07-22 10:55:31.931288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.385 qpair failed and we were unable to recover it. 00:39:26.385 [2024-07-22 10:55:31.931627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.385 [2024-07-22 10:55:31.931637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.385 qpair failed and we were unable to recover it. 00:39:26.385 [2024-07-22 10:55:31.931934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.385 [2024-07-22 10:55:31.931944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.385 qpair failed and we were unable to recover it. 00:39:26.385 [2024-07-22 10:55:31.932280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.385 [2024-07-22 10:55:31.932289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.385 qpair failed and we were unable to recover it. 00:39:26.385 [2024-07-22 10:55:31.932629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.385 [2024-07-22 10:55:31.932639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.385 qpair failed and we were unable to recover it. 00:39:26.385 [2024-07-22 10:55:31.932957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.385 [2024-07-22 10:55:31.932967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.385 qpair failed and we were unable to recover it. 
00:39:26.385 [2024-07-22 10:55:31.933154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.385 [2024-07-22 10:55:31.933164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.385 qpair failed and we were unable to recover it. 00:39:26.385 [2024-07-22 10:55:31.933501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.385 [2024-07-22 10:55:31.933511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.385 qpair failed and we were unable to recover it. 00:39:26.385 [2024-07-22 10:55:31.933732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.385 [2024-07-22 10:55:31.933741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.385 qpair failed and we were unable to recover it. 00:39:26.385 [2024-07-22 10:55:31.934052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.385 [2024-07-22 10:55:31.934061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.385 qpair failed and we were unable to recover it. 00:39:26.385 [2024-07-22 10:55:31.934373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.385 [2024-07-22 10:55:31.934382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.385 qpair failed and we were unable to recover it. 00:39:26.385 [2024-07-22 10:55:31.934609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.385 [2024-07-22 10:55:31.934620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.385 qpair failed and we were unable to recover it. 00:39:26.385 [2024-07-22 10:55:31.934846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.385 [2024-07-22 10:55:31.934855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.385 qpair failed and we were unable to recover it. 00:39:26.385 [2024-07-22 10:55:31.935051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.385 [2024-07-22 10:55:31.935060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.385 qpair failed and we were unable to recover it. 00:39:26.385 [2024-07-22 10:55:31.935476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.385 [2024-07-22 10:55:31.935485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.385 qpair failed and we were unable to recover it. 00:39:26.385 [2024-07-22 10:55:31.935794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.385 [2024-07-22 10:55:31.935804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.385 qpair failed and we were unable to recover it. 
00:39:26.385 [2024-07-22 10:55:31.936145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.385 [2024-07-22 10:55:31.936156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.385 qpair failed and we were unable to recover it. 00:39:26.385 [2024-07-22 10:55:31.936483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.385 [2024-07-22 10:55:31.936493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.385 qpair failed and we were unable to recover it. 00:39:26.385 [2024-07-22 10:55:31.936809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.385 [2024-07-22 10:55:31.936818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.385 qpair failed and we were unable to recover it. 00:39:26.385 [2024-07-22 10:55:31.937160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.385 [2024-07-22 10:55:31.937170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.385 qpair failed and we were unable to recover it. 00:39:26.385 [2024-07-22 10:55:31.937530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.385 [2024-07-22 10:55:31.937540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.385 qpair failed and we were unable to recover it. 00:39:26.385 [2024-07-22 10:55:31.937851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.385 [2024-07-22 10:55:31.937861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.385 qpair failed and we were unable to recover it. 00:39:26.385 [2024-07-22 10:55:31.938186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.385 [2024-07-22 10:55:31.938195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.385 qpair failed and we were unable to recover it. 00:39:26.385 [2024-07-22 10:55:31.938514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.385 [2024-07-22 10:55:31.938524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.385 qpair failed and we were unable to recover it. 00:39:26.385 [2024-07-22 10:55:31.938835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.385 [2024-07-22 10:55:31.938844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.385 qpair failed and we were unable to recover it. 00:39:26.385 [2024-07-22 10:55:31.939179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.385 [2024-07-22 10:55:31.939188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.385 qpair failed and we were unable to recover it. 
00:39:26.385 [2024-07-22 10:55:31.939435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.385 [2024-07-22 10:55:31.939445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.385 qpair failed and we were unable to recover it. 00:39:26.385 [2024-07-22 10:55:31.939757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.385 [2024-07-22 10:55:31.939767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.385 qpair failed and we were unable to recover it. 00:39:26.385 [2024-07-22 10:55:31.940130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.385 [2024-07-22 10:55:31.940140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.385 qpair failed and we were unable to recover it. 00:39:26.385 [2024-07-22 10:55:31.940462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.385 [2024-07-22 10:55:31.940472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.385 qpair failed and we were unable to recover it. 00:39:26.385 [2024-07-22 10:55:31.940819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.385 [2024-07-22 10:55:31.940829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.385 qpair failed and we were unable to recover it. 00:39:26.385 [2024-07-22 10:55:31.941117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.385 [2024-07-22 10:55:31.941126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.385 qpair failed and we were unable to recover it. 00:39:26.385 [2024-07-22 10:55:31.941412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.385 [2024-07-22 10:55:31.941421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.385 qpair failed and we were unable to recover it. 00:39:26.385 [2024-07-22 10:55:31.941631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.385 [2024-07-22 10:55:31.941641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.385 qpair failed and we were unable to recover it. 00:39:26.385 [2024-07-22 10:55:31.941965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.385 [2024-07-22 10:55:31.941974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.385 qpair failed and we were unable to recover it. 00:39:26.385 [2024-07-22 10:55:31.942271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.385 [2024-07-22 10:55:31.942281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.385 qpair failed and we were unable to recover it. 
00:39:26.385 [2024-07-22 10:55:31.942493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.385 [2024-07-22 10:55:31.942503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.385 qpair failed and we were unable to recover it. 00:39:26.386 [2024-07-22 10:55:31.942694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.386 [2024-07-22 10:55:31.942704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.386 qpair failed and we were unable to recover it. 00:39:26.386 [2024-07-22 10:55:31.943029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.386 [2024-07-22 10:55:31.943039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.386 qpair failed and we were unable to recover it. 00:39:26.386 [2024-07-22 10:55:31.943392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.386 [2024-07-22 10:55:31.943406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.386 qpair failed and we were unable to recover it. 00:39:26.386 [2024-07-22 10:55:31.943748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.386 [2024-07-22 10:55:31.943758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.386 qpair failed and we were unable to recover it. 00:39:26.386 [2024-07-22 10:55:31.944084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.386 [2024-07-22 10:55:31.944094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.386 qpair failed and we were unable to recover it. 00:39:26.386 [2024-07-22 10:55:31.944471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.386 [2024-07-22 10:55:31.944481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.386 qpair failed and we were unable to recover it. 00:39:26.386 [2024-07-22 10:55:31.944789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.386 [2024-07-22 10:55:31.944798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.386 qpair failed and we were unable to recover it. 00:39:26.386 [2024-07-22 10:55:31.945131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.386 [2024-07-22 10:55:31.945141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.386 qpair failed and we were unable to recover it. 00:39:26.386 [2024-07-22 10:55:31.945485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.386 [2024-07-22 10:55:31.945495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.386 qpair failed and we were unable to recover it. 
00:39:26.386 [2024-07-22 10:55:31.945840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.386 [2024-07-22 10:55:31.945849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.386 qpair failed and we were unable to recover it. 00:39:26.386 [2024-07-22 10:55:31.946165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.386 [2024-07-22 10:55:31.946175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.386 qpair failed and we were unable to recover it. 00:39:26.386 [2024-07-22 10:55:31.946551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.386 [2024-07-22 10:55:31.946561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.386 qpair failed and we were unable to recover it. 00:39:26.386 [2024-07-22 10:55:31.946873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.386 [2024-07-22 10:55:31.946882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.386 qpair failed and we were unable to recover it. 00:39:26.386 [2024-07-22 10:55:31.947080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.386 [2024-07-22 10:55:31.947089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.386 qpair failed and we were unable to recover it. 00:39:26.386 [2024-07-22 10:55:31.947442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.386 [2024-07-22 10:55:31.947451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.386 qpair failed and we were unable to recover it. 00:39:26.386 [2024-07-22 10:55:31.947787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.386 [2024-07-22 10:55:31.947797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.386 qpair failed and we were unable to recover it. 00:39:26.386 [2024-07-22 10:55:31.948119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.386 [2024-07-22 10:55:31.948128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.386 qpair failed and we were unable to recover it. 00:39:26.386 [2024-07-22 10:55:31.948471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.386 [2024-07-22 10:55:31.948480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.386 qpair failed and we were unable to recover it. 00:39:26.386 [2024-07-22 10:55:31.948820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.386 [2024-07-22 10:55:31.948831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.386 qpair failed and we were unable to recover it. 
00:39:26.386 [2024-07-22 10:55:31.949143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.386 [2024-07-22 10:55:31.949153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.386 qpair failed and we were unable to recover it. 00:39:26.386 [2024-07-22 10:55:31.949388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.386 [2024-07-22 10:55:31.949438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.386 qpair failed and we were unable to recover it. 00:39:26.386 [2024-07-22 10:55:31.949788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.386 [2024-07-22 10:55:31.949798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.386 qpair failed and we were unable to recover it. 00:39:26.386 [2024-07-22 10:55:31.950150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.386 [2024-07-22 10:55:31.950159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.386 qpair failed and we were unable to recover it. 00:39:26.386 [2024-07-22 10:55:31.950479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.386 [2024-07-22 10:55:31.950488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.386 qpair failed and we were unable to recover it. 00:39:26.386 [2024-07-22 10:55:31.950813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.386 [2024-07-22 10:55:31.950822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.386 qpair failed and we were unable to recover it. 00:39:26.386 [2024-07-22 10:55:31.951148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.386 [2024-07-22 10:55:31.951159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.386 qpair failed and we were unable to recover it. 00:39:26.386 [2024-07-22 10:55:31.951499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.386 [2024-07-22 10:55:31.951509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.386 qpair failed and we were unable to recover it. 00:39:26.386 [2024-07-22 10:55:31.951868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.386 [2024-07-22 10:55:31.951877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.386 qpair failed and we were unable to recover it. 00:39:26.386 [2024-07-22 10:55:31.952130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.386 [2024-07-22 10:55:31.952139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.386 qpair failed and we were unable to recover it. 
00:39:26.386 [2024-07-22 10:55:31.952343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.386 [2024-07-22 10:55:31.952353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.386 qpair failed and we were unable to recover it. 00:39:26.386 [2024-07-22 10:55:31.952693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.386 [2024-07-22 10:55:31.952703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.386 qpair failed and we were unable to recover it. 00:39:26.386 [2024-07-22 10:55:31.953012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.386 [2024-07-22 10:55:31.953022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.386 qpair failed and we were unable to recover it. 00:39:26.386 [2024-07-22 10:55:31.953360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.386 [2024-07-22 10:55:31.953369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.386 qpair failed and we were unable to recover it. 00:39:26.386 [2024-07-22 10:55:31.953700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.386 [2024-07-22 10:55:31.953710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.386 qpair failed and we were unable to recover it. 00:39:26.386 [2024-07-22 10:55:31.954046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.386 [2024-07-22 10:55:31.954055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.386 qpair failed and we were unable to recover it. 00:39:26.386 [2024-07-22 10:55:31.954306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.386 [2024-07-22 10:55:31.954315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.386 qpair failed and we were unable to recover it. 00:39:26.386 [2024-07-22 10:55:31.954641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.387 [2024-07-22 10:55:31.954650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.387 qpair failed and we were unable to recover it. 00:39:26.387 [2024-07-22 10:55:31.954963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.387 [2024-07-22 10:55:31.954973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.387 qpair failed and we were unable to recover it. 00:39:26.387 [2024-07-22 10:55:31.955312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.387 [2024-07-22 10:55:31.955321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.387 qpair failed and we were unable to recover it. 
00:39:26.387 [2024-07-22 10:55:31.955642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.387 [2024-07-22 10:55:31.955652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.387 qpair failed and we were unable to recover it. 00:39:26.387 [2024-07-22 10:55:31.955865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.387 [2024-07-22 10:55:31.955875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.387 qpair failed and we were unable to recover it. 00:39:26.387 [2024-07-22 10:55:31.956190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.387 [2024-07-22 10:55:31.956200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.387 qpair failed and we were unable to recover it. 00:39:26.387 [2024-07-22 10:55:31.956542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.387 [2024-07-22 10:55:31.956552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.387 qpair failed and we were unable to recover it. 00:39:26.387 [2024-07-22 10:55:31.956887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.387 [2024-07-22 10:55:31.956896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.387 qpair failed and we were unable to recover it. 00:39:26.387 [2024-07-22 10:55:31.957243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.387 [2024-07-22 10:55:31.957253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.387 qpair failed and we were unable to recover it. 00:39:26.387 [2024-07-22 10:55:31.957589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.387 [2024-07-22 10:55:31.957601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.387 qpair failed and we were unable to recover it. 00:39:26.387 [2024-07-22 10:55:31.957939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.387 [2024-07-22 10:55:31.957948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.387 qpair failed and we were unable to recover it. 00:39:26.387 [2024-07-22 10:55:31.958291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.387 [2024-07-22 10:55:31.958301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.387 qpair failed and we were unable to recover it. 00:39:26.387 [2024-07-22 10:55:31.958621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.387 [2024-07-22 10:55:31.958631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.387 qpair failed and we were unable to recover it. 
00:39:26.387 [2024-07-22 10:55:31.958926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.387 [2024-07-22 10:55:31.958937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.387 qpair failed and we were unable to recover it. 00:39:26.387 [2024-07-22 10:55:31.959254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.387 [2024-07-22 10:55:31.959264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.387 qpair failed and we were unable to recover it. 00:39:26.387 [2024-07-22 10:55:31.959601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.387 [2024-07-22 10:55:31.959612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.387 qpair failed and we were unable to recover it. 00:39:26.387 [2024-07-22 10:55:31.959959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.387 [2024-07-22 10:55:31.959970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.387 qpair failed and we were unable to recover it. 00:39:26.387 [2024-07-22 10:55:31.960278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.387 [2024-07-22 10:55:31.960288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.387 qpair failed and we were unable to recover it. 00:39:26.387 [2024-07-22 10:55:31.961403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.387 [2024-07-22 10:55:31.961426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.387 qpair failed and we were unable to recover it. 00:39:26.387 [2024-07-22 10:55:31.961829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.387 [2024-07-22 10:55:31.961840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.387 qpair failed and we were unable to recover it. 00:39:26.387 [2024-07-22 10:55:31.962142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.387 [2024-07-22 10:55:31.962153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.387 qpair failed and we were unable to recover it. 00:39:26.387 [2024-07-22 10:55:31.962477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.387 [2024-07-22 10:55:31.962487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.387 qpair failed and we were unable to recover it. 00:39:26.387 [2024-07-22 10:55:31.962809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.387 [2024-07-22 10:55:31.962818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.387 qpair failed and we were unable to recover it. 
00:39:26.387 [2024-07-22 10:55:31.963165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.387 [2024-07-22 10:55:31.963174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.387 qpair failed and we were unable to recover it. 00:39:26.387 [2024-07-22 10:55:31.963516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.387 [2024-07-22 10:55:31.963527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.387 qpair failed and we were unable to recover it. 00:39:26.387 [2024-07-22 10:55:31.963764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.387 [2024-07-22 10:55:31.963774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.387 qpair failed and we were unable to recover it. 00:39:26.387 [2024-07-22 10:55:31.964617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.387 [2024-07-22 10:55:31.964638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.387 qpair failed and we were unable to recover it. 00:39:26.387 [2024-07-22 10:55:31.964976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.387 [2024-07-22 10:55:31.964986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.387 qpair failed and we were unable to recover it. 00:39:26.387 [2024-07-22 10:55:31.965335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.387 [2024-07-22 10:55:31.965345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.387 qpair failed and we were unable to recover it. 00:39:26.387 [2024-07-22 10:55:31.965568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.387 [2024-07-22 10:55:31.965579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.387 qpair failed and we were unable to recover it. 00:39:26.387 [2024-07-22 10:55:31.965904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.387 [2024-07-22 10:55:31.965914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.387 qpair failed and we were unable to recover it. 00:39:26.387 [2024-07-22 10:55:31.966203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.387 [2024-07-22 10:55:31.966213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.387 qpair failed and we were unable to recover it. 00:39:26.387 [2024-07-22 10:55:31.966462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.387 [2024-07-22 10:55:31.966472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.387 qpair failed and we were unable to recover it. 
00:39:26.387 [2024-07-22 10:55:31.966815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.387 [2024-07-22 10:55:31.966825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.387 qpair failed and we were unable to recover it. 00:39:26.387 [2024-07-22 10:55:31.967179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.387 [2024-07-22 10:55:31.967189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.387 qpair failed and we were unable to recover it. 00:39:26.387 [2024-07-22 10:55:31.967414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.387 [2024-07-22 10:55:31.967424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.387 qpair failed and we were unable to recover it. 00:39:26.387 [2024-07-22 10:55:31.967600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.387 [2024-07-22 10:55:31.967610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.387 qpair failed and we were unable to recover it. 00:39:26.387 [2024-07-22 10:55:31.967938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.387 [2024-07-22 10:55:31.967948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.387 qpair failed and we were unable to recover it. 00:39:26.387 [2024-07-22 10:55:31.968274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.387 [2024-07-22 10:55:31.968283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.387 qpair failed and we were unable to recover it. 00:39:26.387 [2024-07-22 10:55:31.968593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.387 [2024-07-22 10:55:31.968603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.388 qpair failed and we were unable to recover it. 00:39:26.388 [2024-07-22 10:55:31.968941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.388 [2024-07-22 10:55:31.968950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.388 qpair failed and we were unable to recover it. 00:39:26.388 [2024-07-22 10:55:31.969249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.388 [2024-07-22 10:55:31.969259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.388 qpair failed and we were unable to recover it. 00:39:26.388 [2024-07-22 10:55:31.969555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.388 [2024-07-22 10:55:31.969565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.388 qpair failed and we were unable to recover it. 
00:39:26.388 [2024-07-22 10:55:31.969886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.388 [2024-07-22 10:55:31.969896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.388 qpair failed and we were unable to recover it. 00:39:26.388 [2024-07-22 10:55:31.970244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.388 [2024-07-22 10:55:31.970254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.388 qpair failed and we were unable to recover it. 00:39:26.388 [2024-07-22 10:55:31.970579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.388 [2024-07-22 10:55:31.970589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.388 qpair failed and we were unable to recover it. 00:39:26.388 [2024-07-22 10:55:31.970919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.388 [2024-07-22 10:55:31.970929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.388 qpair failed and we were unable to recover it. 00:39:26.388 [2024-07-22 10:55:31.971267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.388 [2024-07-22 10:55:31.971277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.388 qpair failed and we were unable to recover it. 00:39:26.388 [2024-07-22 10:55:31.971612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.388 [2024-07-22 10:55:31.971623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.388 qpair failed and we were unable to recover it. 00:39:26.388 [2024-07-22 10:55:31.971944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.388 [2024-07-22 10:55:31.971954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.388 qpair failed and we were unable to recover it. 00:39:26.388 [2024-07-22 10:55:31.972280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.388 [2024-07-22 10:55:31.972291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.388 qpair failed and we were unable to recover it. 00:39:26.388 [2024-07-22 10:55:31.972636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.388 [2024-07-22 10:55:31.972646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.388 qpair failed and we were unable to recover it. 00:39:26.388 [2024-07-22 10:55:31.972984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.388 [2024-07-22 10:55:31.972993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.388 qpair failed and we were unable to recover it. 
00:39:26.388 [2024-07-22 10:55:31.973302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.388 [2024-07-22 10:55:31.973313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.388 qpair failed and we were unable to recover it. 00:39:26.388 [2024-07-22 10:55:31.973640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.388 [2024-07-22 10:55:31.973650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.388 qpair failed and we were unable to recover it. 00:39:26.388 [2024-07-22 10:55:31.973989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.388 [2024-07-22 10:55:31.973999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.388 qpair failed and we were unable to recover it. 00:39:26.388 [2024-07-22 10:55:31.974342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.388 [2024-07-22 10:55:31.974352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.388 qpair failed and we were unable to recover it. 00:39:26.388 [2024-07-22 10:55:31.974531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.388 [2024-07-22 10:55:31.974542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.388 qpair failed and we were unable to recover it. 00:39:26.388 [2024-07-22 10:55:31.974827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.388 [2024-07-22 10:55:31.974836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.388 qpair failed and we were unable to recover it. 00:39:26.388 [2024-07-22 10:55:31.975178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.388 [2024-07-22 10:55:31.975188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.388 qpair failed and we were unable to recover it. 00:39:26.388 [2024-07-22 10:55:31.975528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.388 [2024-07-22 10:55:31.975538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.388 qpair failed and we were unable to recover it. 00:39:26.388 [2024-07-22 10:55:31.975878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.388 [2024-07-22 10:55:31.975888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.388 qpair failed and we were unable to recover it. 00:39:26.388 [2024-07-22 10:55:31.976185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.388 [2024-07-22 10:55:31.976195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.388 qpair failed and we were unable to recover it. 
00:39:26.388 [2024-07-22 10:55:31.976537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.388 [2024-07-22 10:55:31.976547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.388 qpair failed and we were unable to recover it. 00:39:26.388 [2024-07-22 10:55:31.976892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.388 [2024-07-22 10:55:31.976902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.388 qpair failed and we were unable to recover it. 00:39:26.388 [2024-07-22 10:55:31.977215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.388 [2024-07-22 10:55:31.977224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.388 qpair failed and we were unable to recover it. 00:39:26.388 [2024-07-22 10:55:31.977545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.388 [2024-07-22 10:55:31.977555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.388 qpair failed and we were unable to recover it. 00:39:26.388 [2024-07-22 10:55:31.977876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.388 [2024-07-22 10:55:31.977886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.388 qpair failed and we were unable to recover it. 00:39:26.388 [2024-07-22 10:55:31.978105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.388 [2024-07-22 10:55:31.978115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.388 qpair failed and we were unable to recover it. 00:39:26.388 [2024-07-22 10:55:31.978447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.388 [2024-07-22 10:55:31.978458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.388 qpair failed and we were unable to recover it. 00:39:26.388 [2024-07-22 10:55:31.978777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.388 [2024-07-22 10:55:31.978787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.388 qpair failed and we were unable to recover it. 00:39:26.388 [2024-07-22 10:55:31.979067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.388 [2024-07-22 10:55:31.979077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.388 qpair failed and we were unable to recover it. 00:39:26.388 [2024-07-22 10:55:31.979270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.388 [2024-07-22 10:55:31.979280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.388 qpair failed and we were unable to recover it. 
00:39:26.388 [2024-07-22 10:55:31.979720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.388 [2024-07-22 10:55:31.979730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.388 qpair failed and we were unable to recover it. 00:39:26.388 [2024-07-22 10:55:31.980030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.388 [2024-07-22 10:55:31.980041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.388 qpair failed and we were unable to recover it. 00:39:26.388 [2024-07-22 10:55:31.980380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.388 [2024-07-22 10:55:31.980390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.388 qpair failed and we were unable to recover it. 00:39:26.388 [2024-07-22 10:55:31.980725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.388 [2024-07-22 10:55:31.980735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.388 qpair failed and we were unable to recover it. 00:39:26.388 [2024-07-22 10:55:31.981058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.388 [2024-07-22 10:55:31.981069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.388 qpair failed and we were unable to recover it. 00:39:26.388 [2024-07-22 10:55:31.981376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.388 [2024-07-22 10:55:31.981386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.388 qpair failed and we were unable to recover it. 00:39:26.388 [2024-07-22 10:55:31.981727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.388 [2024-07-22 10:55:31.981737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.388 qpair failed and we were unable to recover it. 00:39:26.388 [2024-07-22 10:55:31.982057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.388 [2024-07-22 10:55:31.982067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.389 qpair failed and we were unable to recover it. 00:39:26.389 [2024-07-22 10:55:31.982436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.389 [2024-07-22 10:55:31.982446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.389 qpair failed and we were unable to recover it. 00:39:26.389 [2024-07-22 10:55:31.982855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.389 [2024-07-22 10:55:31.982864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.389 qpair failed and we were unable to recover it. 
00:39:26.389 [2024-07-22 10:55:31.983183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.389 [2024-07-22 10:55:31.983193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.389 qpair failed and we were unable to recover it. 00:39:26.389 [2024-07-22 10:55:31.983500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.389 [2024-07-22 10:55:31.983510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.389 qpair failed and we were unable to recover it. 00:39:26.389 [2024-07-22 10:55:31.983686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.389 [2024-07-22 10:55:31.983696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.389 qpair failed and we were unable to recover it. 00:39:26.389 [2024-07-22 10:55:31.984042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.389 [2024-07-22 10:55:31.984051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.389 qpair failed and we were unable to recover it. 00:39:26.389 [2024-07-22 10:55:31.984348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.389 [2024-07-22 10:55:31.984357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.389 qpair failed and we were unable to recover it. 00:39:26.389 [2024-07-22 10:55:31.984690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.389 [2024-07-22 10:55:31.984701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.389 qpair failed and we were unable to recover it. 00:39:26.389 [2024-07-22 10:55:31.985055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.389 [2024-07-22 10:55:31.985065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.389 qpair failed and we were unable to recover it. 00:39:26.389 [2024-07-22 10:55:31.985386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.389 [2024-07-22 10:55:31.985400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.389 qpair failed and we were unable to recover it. 00:39:26.389 [2024-07-22 10:55:31.985697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.389 [2024-07-22 10:55:31.985706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.389 qpair failed and we were unable to recover it. 00:39:26.389 [2024-07-22 10:55:31.986033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.389 [2024-07-22 10:55:31.986042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.389 qpair failed and we were unable to recover it. 
00:39:26.389 [2024-07-22 10:55:31.986365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.389 [2024-07-22 10:55:31.986375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.389 qpair failed and we were unable to recover it. 00:39:26.389 [2024-07-22 10:55:31.986680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.389 [2024-07-22 10:55:31.986690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.389 qpair failed and we were unable to recover it. 00:39:26.389 [2024-07-22 10:55:31.987004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.389 [2024-07-22 10:55:31.987015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.389 qpair failed and we were unable to recover it. 00:39:26.389 [2024-07-22 10:55:31.987355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.389 [2024-07-22 10:55:31.987365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.389 qpair failed and we were unable to recover it. 00:39:26.389 [2024-07-22 10:55:31.987705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.389 [2024-07-22 10:55:31.987717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.389 qpair failed and we were unable to recover it. 00:39:26.389 [2024-07-22 10:55:31.988024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.389 [2024-07-22 10:55:31.988033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.389 qpair failed and we were unable to recover it. 00:39:26.389 [2024-07-22 10:55:31.988386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.389 [2024-07-22 10:55:31.988404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.389 qpair failed and we were unable to recover it. 00:39:26.389 [2024-07-22 10:55:31.988715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.389 [2024-07-22 10:55:31.988725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.389 qpair failed and we were unable to recover it. 00:39:26.389 [2024-07-22 10:55:31.989095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.389 [2024-07-22 10:55:31.989105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.389 qpair failed and we were unable to recover it. 00:39:26.389 [2024-07-22 10:55:31.989321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.389 [2024-07-22 10:55:31.989330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.389 qpair failed and we were unable to recover it. 
00:39:26.389 [2024-07-22 10:55:31.989704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.389 [2024-07-22 10:55:31.989714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.389 qpair failed and we were unable to recover it. 00:39:26.389 [2024-07-22 10:55:31.989936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.389 [2024-07-22 10:55:31.989947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.389 qpair failed and we were unable to recover it. 00:39:26.389 [2024-07-22 10:55:31.990239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.389 [2024-07-22 10:55:31.990249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.389 qpair failed and we were unable to recover it. 00:39:26.389 [2024-07-22 10:55:31.990570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.389 [2024-07-22 10:55:31.990579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.389 qpair failed and we were unable to recover it. 00:39:26.389 [2024-07-22 10:55:31.990927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.389 [2024-07-22 10:55:31.990936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.389 qpair failed and we were unable to recover it. 00:39:26.389 [2024-07-22 10:55:31.991363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.389 [2024-07-22 10:55:31.991372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.389 qpair failed and we were unable to recover it. 00:39:26.389 [2024-07-22 10:55:31.991719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.389 [2024-07-22 10:55:31.991729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.389 qpair failed and we were unable to recover it. 00:39:26.389 [2024-07-22 10:55:31.992072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.389 [2024-07-22 10:55:31.992081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.389 qpair failed and we were unable to recover it. 00:39:26.389 [2024-07-22 10:55:31.992424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.389 [2024-07-22 10:55:31.992433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.389 qpair failed and we were unable to recover it. 00:39:26.389 [2024-07-22 10:55:31.992681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.389 [2024-07-22 10:55:31.992690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.389 qpair failed and we were unable to recover it. 
00:39:26.389 [2024-07-22 10:55:31.992899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.389 [2024-07-22 10:55:31.992909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.389 qpair failed and we were unable to recover it. 00:39:26.389 [2024-07-22 10:55:31.993101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.389 [2024-07-22 10:55:31.993111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.389 qpair failed and we were unable to recover it. 00:39:26.389 [2024-07-22 10:55:31.993469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.389 [2024-07-22 10:55:31.993478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.389 qpair failed and we were unable to recover it. 00:39:26.389 [2024-07-22 10:55:31.993795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.389 [2024-07-22 10:55:31.993804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.389 qpair failed and we were unable to recover it. 00:39:26.389 [2024-07-22 10:55:31.994072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.389 [2024-07-22 10:55:31.994082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.389 qpair failed and we were unable to recover it. 00:39:26.389 [2024-07-22 10:55:31.994337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.389 [2024-07-22 10:55:31.994346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.389 qpair failed and we were unable to recover it. 00:39:26.389 [2024-07-22 10:55:31.994625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.389 [2024-07-22 10:55:31.994635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.389 qpair failed and we were unable to recover it. 00:39:26.389 [2024-07-22 10:55:31.994860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.389 [2024-07-22 10:55:31.994869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.389 qpair failed and we were unable to recover it. 00:39:26.389 [2024-07-22 10:55:31.995195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.390 [2024-07-22 10:55:31.995204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.390 qpair failed and we were unable to recover it. 00:39:26.390 [2024-07-22 10:55:31.995618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.390 [2024-07-22 10:55:31.995627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.390 qpair failed and we were unable to recover it. 
00:39:26.390 [2024-07-22 10:55:31.995942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.390 [2024-07-22 10:55:31.995951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:26.390 qpair failed and we were unable to recover it.
00:39:26.390 [... the same three-line error repeats for every connection retry between 10:55:31.995942 and 10:55:32.062455 (elapsed 00:39:26.390 to 00:39:26.671): posix_sock_create reports connect() failed with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x1c098a0 at addr=10.0.0.2, port=4420, and each time the qpair fails and is not recovered ...]
00:39:26.671 [2024-07-22 10:55:32.062455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.671 [2024-07-22 10:55:32.062465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:26.671 qpair failed and we were unable to recover it.
00:39:26.671 [2024-07-22 10:55:32.062787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.671 [2024-07-22 10:55:32.062798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.671 qpair failed and we were unable to recover it. 00:39:26.671 [2024-07-22 10:55:32.063133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.671 [2024-07-22 10:55:32.063142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.671 qpair failed and we were unable to recover it. 00:39:26.671 [2024-07-22 10:55:32.063485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.671 [2024-07-22 10:55:32.063504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.671 qpair failed and we were unable to recover it. 00:39:26.671 [2024-07-22 10:55:32.063845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.671 [2024-07-22 10:55:32.063855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.671 qpair failed and we were unable to recover it. 00:39:26.671 [2024-07-22 10:55:32.064180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.671 [2024-07-22 10:55:32.064190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.671 qpair failed and we were unable to recover it. 00:39:26.671 [2024-07-22 10:55:32.064423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.671 [2024-07-22 10:55:32.064433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.671 qpair failed and we were unable to recover it. 00:39:26.671 [2024-07-22 10:55:32.064731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.671 [2024-07-22 10:55:32.064741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.671 qpair failed and we were unable to recover it. 00:39:26.671 [2024-07-22 10:55:32.065073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.671 [2024-07-22 10:55:32.065083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.671 qpair failed and we were unable to recover it. 00:39:26.671 [2024-07-22 10:55:32.065308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.671 [2024-07-22 10:55:32.065317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.671 qpair failed and we were unable to recover it. 00:39:26.671 [2024-07-22 10:55:32.065532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.671 [2024-07-22 10:55:32.065542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.671 qpair failed and we were unable to recover it. 
00:39:26.671 [2024-07-22 10:55:32.065863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.671 [2024-07-22 10:55:32.065872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.671 qpair failed and we were unable to recover it. 00:39:26.671 [2024-07-22 10:55:32.066178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.671 [2024-07-22 10:55:32.066188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.671 qpair failed and we were unable to recover it. 00:39:26.671 [2024-07-22 10:55:32.066516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.671 [2024-07-22 10:55:32.066526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.671 qpair failed and we were unable to recover it. 00:39:26.671 [2024-07-22 10:55:32.066814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.671 [2024-07-22 10:55:32.066823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.671 qpair failed and we were unable to recover it. 00:39:26.671 [2024-07-22 10:55:32.067131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.671 [2024-07-22 10:55:32.067141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.671 qpair failed and we were unable to recover it. 00:39:26.671 [2024-07-22 10:55:32.067457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.671 [2024-07-22 10:55:32.067469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.671 qpair failed and we were unable to recover it. 00:39:26.671 [2024-07-22 10:55:32.067763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.671 [2024-07-22 10:55:32.067772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.671 qpair failed and we were unable to recover it. 00:39:26.671 [2024-07-22 10:55:32.068091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.671 [2024-07-22 10:55:32.068101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.671 qpair failed and we were unable to recover it. 00:39:26.671 [2024-07-22 10:55:32.068439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.671 [2024-07-22 10:55:32.068449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.671 qpair failed and we were unable to recover it. 00:39:26.671 [2024-07-22 10:55:32.068792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.671 [2024-07-22 10:55:32.068801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.671 qpair failed and we were unable to recover it. 
00:39:26.671 [2024-07-22 10:55:32.069134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.671 [2024-07-22 10:55:32.069144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.671 qpair failed and we were unable to recover it. 00:39:26.671 [2024-07-22 10:55:32.069441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.671 [2024-07-22 10:55:32.069451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.671 qpair failed and we were unable to recover it. 00:39:26.671 [2024-07-22 10:55:32.069755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.671 [2024-07-22 10:55:32.069764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.671 qpair failed and we were unable to recover it. 00:39:26.671 [2024-07-22 10:55:32.070082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.671 [2024-07-22 10:55:32.070092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.671 qpair failed and we were unable to recover it. 00:39:26.671 [2024-07-22 10:55:32.070407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.671 [2024-07-22 10:55:32.070417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.671 qpair failed and we were unable to recover it. 00:39:26.671 [2024-07-22 10:55:32.070809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.671 [2024-07-22 10:55:32.070819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.671 qpair failed and we were unable to recover it. 00:39:26.671 [2024-07-22 10:55:32.071127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.672 [2024-07-22 10:55:32.071137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.672 qpair failed and we were unable to recover it. 00:39:26.672 [2024-07-22 10:55:32.071364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.672 [2024-07-22 10:55:32.071375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.672 qpair failed and we were unable to recover it. 00:39:26.672 [2024-07-22 10:55:32.071696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.672 [2024-07-22 10:55:32.071707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.672 qpair failed and we were unable to recover it. 00:39:26.672 [2024-07-22 10:55:32.072023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.672 [2024-07-22 10:55:32.072034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.672 qpair failed and we were unable to recover it. 
00:39:26.672 [2024-07-22 10:55:32.072382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.672 [2024-07-22 10:55:32.072392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.672 qpair failed and we were unable to recover it. 00:39:26.672 [2024-07-22 10:55:32.072739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.672 [2024-07-22 10:55:32.072748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.672 qpair failed and we were unable to recover it. 00:39:26.672 [2024-07-22 10:55:32.073037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.672 [2024-07-22 10:55:32.073046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.672 qpair failed and we were unable to recover it. 00:39:26.672 [2024-07-22 10:55:32.073385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.672 [2024-07-22 10:55:32.073399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.672 qpair failed and we were unable to recover it. 00:39:26.672 [2024-07-22 10:55:32.073695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.672 [2024-07-22 10:55:32.073705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.672 qpair failed and we were unable to recover it. 00:39:26.672 [2024-07-22 10:55:32.074033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.672 [2024-07-22 10:55:32.074043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.672 qpair failed and we were unable to recover it. 00:39:26.672 [2024-07-22 10:55:32.074338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.672 [2024-07-22 10:55:32.074348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.672 qpair failed and we were unable to recover it. 00:39:26.672 [2024-07-22 10:55:32.074671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.672 [2024-07-22 10:55:32.074680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.672 qpair failed and we were unable to recover it. 00:39:26.672 [2024-07-22 10:55:32.074975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.672 [2024-07-22 10:55:32.074984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.672 qpair failed and we were unable to recover it. 00:39:26.672 [2024-07-22 10:55:32.075276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.672 [2024-07-22 10:55:32.075285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.672 qpair failed and we were unable to recover it. 
00:39:26.672 [2024-07-22 10:55:32.075612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.672 [2024-07-22 10:55:32.075622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.672 qpair failed and we were unable to recover it. 00:39:26.672 [2024-07-22 10:55:32.075961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.672 [2024-07-22 10:55:32.075971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.672 qpair failed and we were unable to recover it. 00:39:26.672 [2024-07-22 10:55:32.076309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.672 [2024-07-22 10:55:32.076319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.672 qpair failed and we were unable to recover it. 00:39:26.672 [2024-07-22 10:55:32.076674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.672 [2024-07-22 10:55:32.076684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.672 qpair failed and we were unable to recover it. 00:39:26.672 [2024-07-22 10:55:32.077072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.672 [2024-07-22 10:55:32.077082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.672 qpair failed and we were unable to recover it. 00:39:26.672 [2024-07-22 10:55:32.077409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.672 [2024-07-22 10:55:32.077419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.672 qpair failed and we were unable to recover it. 00:39:26.672 [2024-07-22 10:55:32.077758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.672 [2024-07-22 10:55:32.077768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.672 qpair failed and we were unable to recover it. 00:39:26.672 [2024-07-22 10:55:32.078105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.672 [2024-07-22 10:55:32.078116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.672 qpair failed and we were unable to recover it. 00:39:26.672 [2024-07-22 10:55:32.078322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.672 [2024-07-22 10:55:32.078332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.672 qpair failed and we were unable to recover it. 00:39:26.672 [2024-07-22 10:55:32.078620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.672 [2024-07-22 10:55:32.078630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.672 qpair failed and we were unable to recover it. 
00:39:26.672 [2024-07-22 10:55:32.078945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.672 [2024-07-22 10:55:32.078955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.672 qpair failed and we were unable to recover it. 00:39:26.672 [2024-07-22 10:55:32.079347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.672 [2024-07-22 10:55:32.079356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.672 qpair failed and we were unable to recover it. 00:39:26.672 [2024-07-22 10:55:32.079794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.672 [2024-07-22 10:55:32.079804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.672 qpair failed and we were unable to recover it. 00:39:26.672 [2024-07-22 10:55:32.080107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.672 [2024-07-22 10:55:32.080117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.672 qpair failed and we were unable to recover it. 00:39:26.672 [2024-07-22 10:55:32.080443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.672 [2024-07-22 10:55:32.080453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.672 qpair failed and we were unable to recover it. 00:39:26.672 [2024-07-22 10:55:32.080686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.672 [2024-07-22 10:55:32.080695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.672 qpair failed and we were unable to recover it. 00:39:26.672 [2024-07-22 10:55:32.081049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.672 [2024-07-22 10:55:32.081058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.672 qpair failed and we were unable to recover it. 00:39:26.672 [2024-07-22 10:55:32.081274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.672 [2024-07-22 10:55:32.081284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.672 qpair failed and we were unable to recover it. 00:39:26.672 [2024-07-22 10:55:32.081596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.672 [2024-07-22 10:55:32.081606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.672 qpair failed and we were unable to recover it. 00:39:26.672 [2024-07-22 10:55:32.081913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.672 [2024-07-22 10:55:32.081922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.672 qpair failed and we were unable to recover it. 
00:39:26.672 [2024-07-22 10:55:32.082135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.672 [2024-07-22 10:55:32.082145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.672 qpair failed and we were unable to recover it. 00:39:26.672 [2024-07-22 10:55:32.082376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.672 [2024-07-22 10:55:32.082386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.672 qpair failed and we were unable to recover it. 00:39:26.672 [2024-07-22 10:55:32.082710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.672 [2024-07-22 10:55:32.082721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.672 qpair failed and we were unable to recover it. 00:39:26.672 [2024-07-22 10:55:32.083029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.672 [2024-07-22 10:55:32.083039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.672 qpair failed and we were unable to recover it. 00:39:26.672 [2024-07-22 10:55:32.083354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.672 [2024-07-22 10:55:32.083364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.672 qpair failed and we were unable to recover it. 00:39:26.672 [2024-07-22 10:55:32.083580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.672 [2024-07-22 10:55:32.083591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.672 qpair failed and we were unable to recover it. 00:39:26.672 [2024-07-22 10:55:32.083939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.673 [2024-07-22 10:55:32.083949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.673 qpair failed and we were unable to recover it. 00:39:26.673 [2024-07-22 10:55:32.084267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.673 [2024-07-22 10:55:32.084277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.673 qpair failed and we were unable to recover it. 00:39:26.673 [2024-07-22 10:55:32.084536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.673 [2024-07-22 10:55:32.084545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.673 qpair failed and we were unable to recover it. 00:39:26.673 [2024-07-22 10:55:32.084761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.673 [2024-07-22 10:55:32.084771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.673 qpair failed and we were unable to recover it. 
00:39:26.673 [2024-07-22 10:55:32.084990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.673 [2024-07-22 10:55:32.085000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.673 qpair failed and we were unable to recover it. 00:39:26.673 [2024-07-22 10:55:32.085355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.673 [2024-07-22 10:55:32.085364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.673 qpair failed and we were unable to recover it. 00:39:26.673 [2024-07-22 10:55:32.085818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.673 [2024-07-22 10:55:32.085828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.673 qpair failed and we were unable to recover it. 00:39:26.673 [2024-07-22 10:55:32.086251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.673 [2024-07-22 10:55:32.086260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.673 qpair failed and we were unable to recover it. 00:39:26.673 [2024-07-22 10:55:32.086575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.673 [2024-07-22 10:55:32.086585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.673 qpair failed and we were unable to recover it. 00:39:26.673 [2024-07-22 10:55:32.086937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.673 [2024-07-22 10:55:32.086947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.673 qpair failed and we were unable to recover it. 00:39:26.673 [2024-07-22 10:55:32.087263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.673 [2024-07-22 10:55:32.087273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.673 qpair failed and we were unable to recover it. 00:39:26.673 [2024-07-22 10:55:32.087493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.673 [2024-07-22 10:55:32.087502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.673 qpair failed and we were unable to recover it. 00:39:26.673 [2024-07-22 10:55:32.087720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.673 [2024-07-22 10:55:32.087729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.673 qpair failed and we were unable to recover it. 00:39:26.673 [2024-07-22 10:55:32.087943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.673 [2024-07-22 10:55:32.087953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.673 qpair failed and we were unable to recover it. 
00:39:26.673 [2024-07-22 10:55:32.088218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.673 [2024-07-22 10:55:32.088227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.673 qpair failed and we were unable to recover it. 00:39:26.673 [2024-07-22 10:55:32.088554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.673 [2024-07-22 10:55:32.088563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.673 qpair failed and we were unable to recover it. 00:39:26.673 [2024-07-22 10:55:32.088866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.673 [2024-07-22 10:55:32.088876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.673 qpair failed and we were unable to recover it. 00:39:26.673 [2024-07-22 10:55:32.089196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.673 [2024-07-22 10:55:32.089210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.673 qpair failed and we were unable to recover it. 00:39:26.673 [2024-07-22 10:55:32.089560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.673 [2024-07-22 10:55:32.089570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.673 qpair failed and we were unable to recover it. 00:39:26.673 [2024-07-22 10:55:32.089743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.673 [2024-07-22 10:55:32.089754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.673 qpair failed and we were unable to recover it. 00:39:26.673 [2024-07-22 10:55:32.090097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.673 [2024-07-22 10:55:32.090107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.673 qpair failed and we were unable to recover it. 00:39:26.673 [2024-07-22 10:55:32.090466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.673 [2024-07-22 10:55:32.090476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.673 qpair failed and we were unable to recover it. 00:39:26.673 [2024-07-22 10:55:32.090815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.673 [2024-07-22 10:55:32.090824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.673 qpair failed and we were unable to recover it. 00:39:26.673 [2024-07-22 10:55:32.091032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.673 [2024-07-22 10:55:32.091041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.673 qpair failed and we were unable to recover it. 
00:39:26.673 [2024-07-22 10:55:32.091210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.673 [2024-07-22 10:55:32.091221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.673 qpair failed and we were unable to recover it. 00:39:26.673 [2024-07-22 10:55:32.091518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.673 [2024-07-22 10:55:32.091528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.673 qpair failed and we were unable to recover it. 00:39:26.673 [2024-07-22 10:55:32.091854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.673 [2024-07-22 10:55:32.091864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.673 qpair failed and we were unable to recover it. 00:39:26.673 [2024-07-22 10:55:32.092215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.673 [2024-07-22 10:55:32.092225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.673 qpair failed and we were unable to recover it. 00:39:26.673 [2024-07-22 10:55:32.092417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.673 [2024-07-22 10:55:32.092428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.673 qpair failed and we were unable to recover it. 00:39:26.673 [2024-07-22 10:55:32.092654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.673 [2024-07-22 10:55:32.092663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.673 qpair failed and we were unable to recover it. 00:39:26.673 [2024-07-22 10:55:32.093047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.673 [2024-07-22 10:55:32.093056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.673 qpair failed and we were unable to recover it. 00:39:26.673 [2024-07-22 10:55:32.093371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.673 [2024-07-22 10:55:32.093380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.673 qpair failed and we were unable to recover it. 00:39:26.673 [2024-07-22 10:55:32.093759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.673 [2024-07-22 10:55:32.093768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.673 qpair failed and we were unable to recover it. 00:39:26.673 [2024-07-22 10:55:32.093975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.673 [2024-07-22 10:55:32.093984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.673 qpair failed and we were unable to recover it. 
00:39:26.673 [2024-07-22 10:55:32.094279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.673 [2024-07-22 10:55:32.094289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.673 qpair failed and we were unable to recover it. 00:39:26.673 [2024-07-22 10:55:32.094592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.673 [2024-07-22 10:55:32.094602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.673 qpair failed and we were unable to recover it. 00:39:26.673 [2024-07-22 10:55:32.094970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.673 [2024-07-22 10:55:32.094980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.673 qpair failed and we were unable to recover it. 00:39:26.673 [2024-07-22 10:55:32.095325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.673 [2024-07-22 10:55:32.095334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.673 qpair failed and we were unable to recover it. 00:39:26.673 [2024-07-22 10:55:32.095601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.673 [2024-07-22 10:55:32.095611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.673 qpair failed and we were unable to recover it. 00:39:26.673 [2024-07-22 10:55:32.095927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.673 [2024-07-22 10:55:32.095937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.673 qpair failed and we were unable to recover it. 00:39:26.673 [2024-07-22 10:55:32.096171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.673 [2024-07-22 10:55:32.096181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.673 qpair failed and we were unable to recover it. 00:39:26.674 [2024-07-22 10:55:32.096497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.674 [2024-07-22 10:55:32.096507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.674 qpair failed and we were unable to recover it. 00:39:26.674 [2024-07-22 10:55:32.096837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.674 [2024-07-22 10:55:32.096846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.674 qpair failed and we were unable to recover it. 00:39:26.674 [2024-07-22 10:55:32.097193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.674 [2024-07-22 10:55:32.097202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.674 qpair failed and we were unable to recover it. 
00:39:26.674 [2024-07-22 10:55:32.097533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.674 [2024-07-22 10:55:32.097545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.674 qpair failed and we were unable to recover it. 00:39:26.674 [2024-07-22 10:55:32.097867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.674 [2024-07-22 10:55:32.097876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.674 qpair failed and we were unable to recover it. 00:39:26.674 [2024-07-22 10:55:32.098275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.674 [2024-07-22 10:55:32.098285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.674 qpair failed and we were unable to recover it. 00:39:26.674 [2024-07-22 10:55:32.098696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.674 [2024-07-22 10:55:32.098706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.674 qpair failed and we were unable to recover it. 00:39:26.674 [2024-07-22 10:55:32.099040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.674 [2024-07-22 10:55:32.099050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.674 qpair failed and we were unable to recover it. 00:39:26.674 [2024-07-22 10:55:32.099392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.674 [2024-07-22 10:55:32.099407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.674 qpair failed and we were unable to recover it. 00:39:26.674 [2024-07-22 10:55:32.099735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.674 [2024-07-22 10:55:32.099744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.674 qpair failed and we were unable to recover it. 00:39:26.674 [2024-07-22 10:55:32.100084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.674 [2024-07-22 10:55:32.100093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.674 qpair failed and we were unable to recover it. 00:39:26.674 [2024-07-22 10:55:32.100403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.674 [2024-07-22 10:55:32.100413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.674 qpair failed and we were unable to recover it. 00:39:26.674 [2024-07-22 10:55:32.100763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.674 [2024-07-22 10:55:32.100773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.674 qpair failed and we were unable to recover it. 
00:39:26.674 [2024-07-22 10:55:32.101094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.674 [2024-07-22 10:55:32.101103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.674 qpair failed and we were unable to recover it. 00:39:26.674 [2024-07-22 10:55:32.101410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.674 [2024-07-22 10:55:32.101421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.674 qpair failed and we were unable to recover it. 00:39:26.674 [2024-07-22 10:55:32.101776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.674 [2024-07-22 10:55:32.101785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.674 qpair failed and we were unable to recover it. 00:39:26.674 [2024-07-22 10:55:32.102100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.674 [2024-07-22 10:55:32.102111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.674 qpair failed and we were unable to recover it. 00:39:26.674 [2024-07-22 10:55:32.102433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.674 [2024-07-22 10:55:32.102443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.674 qpair failed and we were unable to recover it. 00:39:26.674 [2024-07-22 10:55:32.102774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.674 [2024-07-22 10:55:32.102783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.674 qpair failed and we were unable to recover it. 00:39:26.674 [2024-07-22 10:55:32.103093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.674 [2024-07-22 10:55:32.103102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.674 qpair failed and we were unable to recover it. 00:39:26.674 [2024-07-22 10:55:32.103403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.674 [2024-07-22 10:55:32.103413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.674 qpair failed and we were unable to recover it. 00:39:26.674 [2024-07-22 10:55:32.103714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.674 [2024-07-22 10:55:32.103723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.674 qpair failed and we were unable to recover it. 00:39:26.674 [2024-07-22 10:55:32.104077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.674 [2024-07-22 10:55:32.104086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.674 qpair failed and we were unable to recover it. 
00:39:26.674 [2024-07-22 10:55:32.104306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:39:26.674 [2024-07-22 10:55:32.104316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 
00:39:26.674 qpair failed and we were unable to recover it. 
00:39:26.674 [... the same three-message sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats verbatim, with only the timestamps advancing, for every connection attempt from 2024-07-22 10:55:32.104306 through 10:55:32.171219 (log wall clock 00:39:26.674 - 00:39:26.680) ...]
00:39:26.680 [2024-07-22 10:55:32.171569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.680 [2024-07-22 10:55:32.171580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.680 qpair failed and we were unable to recover it. 00:39:26.680 [2024-07-22 10:55:32.171906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.680 [2024-07-22 10:55:32.171915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.680 qpair failed and we were unable to recover it. 00:39:26.680 [2024-07-22 10:55:32.172291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.680 [2024-07-22 10:55:32.172300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.680 qpair failed and we were unable to recover it. 00:39:26.680 [2024-07-22 10:55:32.172647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.680 [2024-07-22 10:55:32.172657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.680 qpair failed and we were unable to recover it. 00:39:26.680 [2024-07-22 10:55:32.172999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.680 [2024-07-22 10:55:32.173008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.680 qpair failed and we were unable to recover it. 00:39:26.680 [2024-07-22 10:55:32.173319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.680 [2024-07-22 10:55:32.173330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.680 qpair failed and we were unable to recover it. 00:39:26.680 [2024-07-22 10:55:32.173720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.680 [2024-07-22 10:55:32.173730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.680 qpair failed and we were unable to recover it. 00:39:26.680 [2024-07-22 10:55:32.174043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.680 [2024-07-22 10:55:32.174052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.680 qpair failed and we were unable to recover it. 00:39:26.680 [2024-07-22 10:55:32.174375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.680 [2024-07-22 10:55:32.174384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.680 qpair failed and we were unable to recover it. 00:39:26.680 [2024-07-22 10:55:32.174710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.680 [2024-07-22 10:55:32.174720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.680 qpair failed and we were unable to recover it. 
00:39:26.680 [2024-07-22 10:55:32.175086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.680 [2024-07-22 10:55:32.175095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.680 qpair failed and we were unable to recover it. 00:39:26.680 [2024-07-22 10:55:32.175416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.680 [2024-07-22 10:55:32.175425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.680 qpair failed and we were unable to recover it. 00:39:26.680 [2024-07-22 10:55:32.175752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.680 [2024-07-22 10:55:32.175763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.680 qpair failed and we were unable to recover it. 00:39:26.680 [2024-07-22 10:55:32.176085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.680 [2024-07-22 10:55:32.176095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.680 qpair failed and we were unable to recover it. 00:39:26.680 [2024-07-22 10:55:32.176441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.680 [2024-07-22 10:55:32.176451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.680 qpair failed and we were unable to recover it. 00:39:26.680 [2024-07-22 10:55:32.176648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.680 [2024-07-22 10:55:32.176659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.680 qpair failed and we were unable to recover it. 00:39:26.680 [2024-07-22 10:55:32.176938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.680 [2024-07-22 10:55:32.176947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.680 qpair failed and we were unable to recover it. 00:39:26.680 [2024-07-22 10:55:32.177176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.680 [2024-07-22 10:55:32.177186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.680 qpair failed and we were unable to recover it. 00:39:26.680 [2024-07-22 10:55:32.177521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.680 [2024-07-22 10:55:32.177531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.680 qpair failed and we were unable to recover it. 00:39:26.680 [2024-07-22 10:55:32.177884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.680 [2024-07-22 10:55:32.177893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.680 qpair failed and we were unable to recover it. 
00:39:26.680 [2024-07-22 10:55:32.178200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.680 [2024-07-22 10:55:32.178210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.680 qpair failed and we were unable to recover it. 00:39:26.680 [2024-07-22 10:55:32.178543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.680 [2024-07-22 10:55:32.178553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.680 qpair failed and we were unable to recover it. 00:39:26.680 [2024-07-22 10:55:32.178960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.680 [2024-07-22 10:55:32.178969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.680 qpair failed and we were unable to recover it. 00:39:26.680 [2024-07-22 10:55:32.179283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.680 [2024-07-22 10:55:32.179293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.680 qpair failed and we were unable to recover it. 00:39:26.680 [2024-07-22 10:55:32.179638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.680 [2024-07-22 10:55:32.179647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.680 qpair failed and we were unable to recover it. 00:39:26.680 [2024-07-22 10:55:32.179953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.680 [2024-07-22 10:55:32.179963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.680 qpair failed and we were unable to recover it. 00:39:26.680 [2024-07-22 10:55:32.180314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.680 [2024-07-22 10:55:32.180323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.680 qpair failed and we were unable to recover it. 00:39:26.680 [2024-07-22 10:55:32.180736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.680 [2024-07-22 10:55:32.180746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.680 qpair failed and we were unable to recover it. 00:39:26.680 [2024-07-22 10:55:32.181049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.680 [2024-07-22 10:55:32.181064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.680 qpair failed and we were unable to recover it. 00:39:26.680 [2024-07-22 10:55:32.181384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.680 [2024-07-22 10:55:32.181394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.680 qpair failed and we were unable to recover it. 
00:39:26.680 [2024-07-22 10:55:32.181707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.680 [2024-07-22 10:55:32.181718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.680 qpair failed and we were unable to recover it. 00:39:26.680 [2024-07-22 10:55:32.181949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.680 [2024-07-22 10:55:32.181959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.680 qpair failed and we were unable to recover it. 00:39:26.680 [2024-07-22 10:55:32.182276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.680 [2024-07-22 10:55:32.182287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.680 qpair failed and we were unable to recover it. 00:39:26.680 [2024-07-22 10:55:32.182634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.680 [2024-07-22 10:55:32.182644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.680 qpair failed and we were unable to recover it. 00:39:26.680 [2024-07-22 10:55:32.182986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.680 [2024-07-22 10:55:32.182996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.680 qpair failed and we were unable to recover it. 00:39:26.680 [2024-07-22 10:55:32.183337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.680 [2024-07-22 10:55:32.183346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.680 qpair failed and we were unable to recover it. 00:39:26.680 [2024-07-22 10:55:32.183644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.680 [2024-07-22 10:55:32.183655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.680 qpair failed and we were unable to recover it. 00:39:26.680 [2024-07-22 10:55:32.183976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.680 [2024-07-22 10:55:32.183986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.680 qpair failed and we were unable to recover it. 00:39:26.681 [2024-07-22 10:55:32.184290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.681 [2024-07-22 10:55:32.184300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.681 qpair failed and we were unable to recover it. 00:39:26.681 [2024-07-22 10:55:32.184639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.681 [2024-07-22 10:55:32.184649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.681 qpair failed and we were unable to recover it. 
00:39:26.681 [2024-07-22 10:55:32.184871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.681 [2024-07-22 10:55:32.184881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.681 qpair failed and we were unable to recover it. 00:39:26.681 [2024-07-22 10:55:32.185205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.681 [2024-07-22 10:55:32.185215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.681 qpair failed and we were unable to recover it. 00:39:26.681 [2024-07-22 10:55:32.185567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.681 [2024-07-22 10:55:32.185577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.681 qpair failed and we were unable to recover it. 00:39:26.681 [2024-07-22 10:55:32.185923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.681 [2024-07-22 10:55:32.185933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.681 qpair failed and we were unable to recover it. 00:39:26.681 [2024-07-22 10:55:32.186271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.681 [2024-07-22 10:55:32.186280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.681 qpair failed and we were unable to recover it. 00:39:26.681 [2024-07-22 10:55:32.186601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.681 [2024-07-22 10:55:32.186612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.681 qpair failed and we were unable to recover it. 00:39:26.681 [2024-07-22 10:55:32.186959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.681 [2024-07-22 10:55:32.186969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.681 qpair failed and we were unable to recover it. 00:39:26.681 [2024-07-22 10:55:32.187308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.681 [2024-07-22 10:55:32.187317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.681 qpair failed and we were unable to recover it. 00:39:26.681 [2024-07-22 10:55:32.187632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.681 [2024-07-22 10:55:32.187642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.681 qpair failed and we were unable to recover it. 00:39:26.681 [2024-07-22 10:55:32.187982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.681 [2024-07-22 10:55:32.187991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.681 qpair failed and we were unable to recover it. 
00:39:26.681 [2024-07-22 10:55:32.188334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.681 [2024-07-22 10:55:32.188343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.681 qpair failed and we were unable to recover it. 00:39:26.681 [2024-07-22 10:55:32.188648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.681 [2024-07-22 10:55:32.188657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.681 qpair failed and we were unable to recover it. 00:39:26.681 [2024-07-22 10:55:32.188974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.681 [2024-07-22 10:55:32.188984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.681 qpair failed and we were unable to recover it. 00:39:26.681 [2024-07-22 10:55:32.189202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.681 [2024-07-22 10:55:32.189212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.681 qpair failed and we were unable to recover it. 00:39:26.681 [2024-07-22 10:55:32.189520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.681 [2024-07-22 10:55:32.189529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.681 qpair failed and we were unable to recover it. 00:39:26.681 [2024-07-22 10:55:32.189845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.681 [2024-07-22 10:55:32.189856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.681 qpair failed and we were unable to recover it. 00:39:26.681 [2024-07-22 10:55:32.190169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.681 [2024-07-22 10:55:32.190179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.681 qpair failed and we were unable to recover it. 00:39:26.681 [2024-07-22 10:55:32.190499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.681 [2024-07-22 10:55:32.190509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.681 qpair failed and we were unable to recover it. 00:39:26.681 [2024-07-22 10:55:32.190831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.681 [2024-07-22 10:55:32.190841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.681 qpair failed and we were unable to recover it. 00:39:26.681 [2024-07-22 10:55:32.191127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.681 [2024-07-22 10:55:32.191136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.681 qpair failed and we were unable to recover it. 
00:39:26.681 [2024-07-22 10:55:32.191460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.681 [2024-07-22 10:55:32.191471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.681 qpair failed and we were unable to recover it. 00:39:26.681 [2024-07-22 10:55:32.191810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.681 [2024-07-22 10:55:32.191819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.681 qpair failed and we were unable to recover it. 00:39:26.681 [2024-07-22 10:55:32.192123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.681 [2024-07-22 10:55:32.192132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.681 qpair failed and we were unable to recover it. 00:39:26.681 [2024-07-22 10:55:32.192424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.681 [2024-07-22 10:55:32.192434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.681 qpair failed and we were unable to recover it. 00:39:26.681 [2024-07-22 10:55:32.192769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.681 [2024-07-22 10:55:32.192779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.681 qpair failed and we were unable to recover it. 00:39:26.681 [2024-07-22 10:55:32.193080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.681 [2024-07-22 10:55:32.193089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.681 qpair failed and we were unable to recover it. 00:39:26.681 [2024-07-22 10:55:32.193430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.681 [2024-07-22 10:55:32.193440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.681 qpair failed and we were unable to recover it. 00:39:26.681 [2024-07-22 10:55:32.193747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.681 [2024-07-22 10:55:32.193757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.681 qpair failed and we were unable to recover it. 00:39:26.681 [2024-07-22 10:55:32.194100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.681 [2024-07-22 10:55:32.194110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.681 qpair failed and we were unable to recover it. 00:39:26.681 [2024-07-22 10:55:32.194432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.681 [2024-07-22 10:55:32.194442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.681 qpair failed and we were unable to recover it. 
00:39:26.681 [2024-07-22 10:55:32.194765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.681 [2024-07-22 10:55:32.194774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.681 qpair failed and we were unable to recover it. 00:39:26.681 [2024-07-22 10:55:32.195079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.681 [2024-07-22 10:55:32.195089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.681 qpair failed and we were unable to recover it. 00:39:26.681 [2024-07-22 10:55:32.195471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.681 [2024-07-22 10:55:32.195480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.681 qpair failed and we were unable to recover it. 00:39:26.681 [2024-07-22 10:55:32.195762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.681 [2024-07-22 10:55:32.195771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.681 qpair failed and we were unable to recover it. 00:39:26.681 [2024-07-22 10:55:32.196113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.681 [2024-07-22 10:55:32.196122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.681 qpair failed and we were unable to recover it. 00:39:26.681 [2024-07-22 10:55:32.196461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.681 [2024-07-22 10:55:32.196471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.681 qpair failed and we were unable to recover it. 00:39:26.681 [2024-07-22 10:55:32.196839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.681 [2024-07-22 10:55:32.196848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.681 qpair failed and we were unable to recover it. 00:39:26.681 [2024-07-22 10:55:32.197068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.681 [2024-07-22 10:55:32.197078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.681 qpair failed and we were unable to recover it. 00:39:26.681 [2024-07-22 10:55:32.197369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.681 [2024-07-22 10:55:32.197379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.681 qpair failed and we were unable to recover it. 00:39:26.682 [2024-07-22 10:55:32.197760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.682 [2024-07-22 10:55:32.197770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.682 qpair failed and we were unable to recover it. 
00:39:26.682 [2024-07-22 10:55:32.198016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.682 [2024-07-22 10:55:32.198026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.682 qpair failed and we were unable to recover it. 00:39:26.682 [2024-07-22 10:55:32.198242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.682 [2024-07-22 10:55:32.198252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.682 qpair failed and we were unable to recover it. 00:39:26.682 [2024-07-22 10:55:32.198565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.682 [2024-07-22 10:55:32.198576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.682 qpair failed and we were unable to recover it. 00:39:26.682 [2024-07-22 10:55:32.198914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.682 [2024-07-22 10:55:32.198924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.682 qpair failed and we were unable to recover it. 00:39:26.682 [2024-07-22 10:55:32.199266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.682 [2024-07-22 10:55:32.199276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.682 qpair failed and we were unable to recover it. 00:39:26.682 [2024-07-22 10:55:32.199621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.682 [2024-07-22 10:55:32.199630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.682 qpair failed and we were unable to recover it. 00:39:26.682 [2024-07-22 10:55:32.199933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.682 [2024-07-22 10:55:32.199944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.682 qpair failed and we were unable to recover it. 00:39:26.682 [2024-07-22 10:55:32.200297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.682 [2024-07-22 10:55:32.200306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.682 qpair failed and we were unable to recover it. 00:39:26.682 [2024-07-22 10:55:32.200636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.682 [2024-07-22 10:55:32.200646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.682 qpair failed and we were unable to recover it. 00:39:26.682 [2024-07-22 10:55:32.200992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.682 [2024-07-22 10:55:32.201001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.682 qpair failed and we were unable to recover it. 
00:39:26.682 [2024-07-22 10:55:32.201301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.682 [2024-07-22 10:55:32.201311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.682 qpair failed and we were unable to recover it. 00:39:26.682 [2024-07-22 10:55:32.201684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.682 [2024-07-22 10:55:32.201693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.682 qpair failed and we were unable to recover it. 00:39:26.682 [2024-07-22 10:55:32.202012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.682 [2024-07-22 10:55:32.202022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.682 qpair failed and we were unable to recover it. 00:39:26.682 [2024-07-22 10:55:32.202340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.682 [2024-07-22 10:55:32.202349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.682 qpair failed and we were unable to recover it. 00:39:26.682 [2024-07-22 10:55:32.202589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.682 [2024-07-22 10:55:32.202600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.682 qpair failed and we were unable to recover it. 00:39:26.682 [2024-07-22 10:55:32.202899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.682 [2024-07-22 10:55:32.202909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.682 qpair failed and we were unable to recover it. 00:39:26.682 [2024-07-22 10:55:32.203217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.682 [2024-07-22 10:55:32.203227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.682 qpair failed and we were unable to recover it. 00:39:26.682 [2024-07-22 10:55:32.203552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.682 [2024-07-22 10:55:32.203562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.682 qpair failed and we were unable to recover it. 00:39:26.682 [2024-07-22 10:55:32.203873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.682 [2024-07-22 10:55:32.203882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.682 qpair failed and we were unable to recover it. 00:39:26.682 [2024-07-22 10:55:32.204201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.682 [2024-07-22 10:55:32.204210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.682 qpair failed and we were unable to recover it. 
00:39:26.682 [2024-07-22 10:55:32.204555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.682 [2024-07-22 10:55:32.204565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.682 qpair failed and we were unable to recover it. 00:39:26.682 [2024-07-22 10:55:32.204858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.682 [2024-07-22 10:55:32.204868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.682 qpair failed and we were unable to recover it. 00:39:26.682 [2024-07-22 10:55:32.205078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.682 [2024-07-22 10:55:32.205088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.682 qpair failed and we were unable to recover it. 00:39:26.682 [2024-07-22 10:55:32.205423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.682 [2024-07-22 10:55:32.205433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.682 qpair failed and we were unable to recover it. 00:39:26.682 [2024-07-22 10:55:32.205746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.682 [2024-07-22 10:55:32.205755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.682 qpair failed and we were unable to recover it. 00:39:26.682 [2024-07-22 10:55:32.206155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.682 [2024-07-22 10:55:32.206165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.682 qpair failed and we were unable to recover it. 00:39:26.682 [2024-07-22 10:55:32.206401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.682 [2024-07-22 10:55:32.206412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.682 qpair failed and we were unable to recover it. 00:39:26.682 [2024-07-22 10:55:32.206755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.682 [2024-07-22 10:55:32.206765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.682 qpair failed and we were unable to recover it. 00:39:26.682 [2024-07-22 10:55:32.206949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.682 [2024-07-22 10:55:32.206960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.682 qpair failed and we were unable to recover it. 00:39:26.682 [2024-07-22 10:55:32.207280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.682 [2024-07-22 10:55:32.207292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.682 qpair failed and we were unable to recover it. 
00:39:26.682 [2024-07-22 10:55:32.207679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.682 [2024-07-22 10:55:32.207689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.682 qpair failed and we were unable to recover it. 00:39:26.682 [2024-07-22 10:55:32.208017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.682 [2024-07-22 10:55:32.208026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.682 qpair failed and we were unable to recover it. 00:39:26.682 [2024-07-22 10:55:32.208228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.682 [2024-07-22 10:55:32.208238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.682 qpair failed and we were unable to recover it. 00:39:26.682 [2024-07-22 10:55:32.208560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.682 [2024-07-22 10:55:32.208570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.682 qpair failed and we were unable to recover it. 00:39:26.682 [2024-07-22 10:55:32.208880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.682 [2024-07-22 10:55:32.208890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.682 qpair failed and we were unable to recover it. 00:39:26.682 [2024-07-22 10:55:32.209234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.682 [2024-07-22 10:55:32.209243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.682 qpair failed and we were unable to recover it. 00:39:26.682 [2024-07-22 10:55:32.209589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.682 [2024-07-22 10:55:32.209600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.682 qpair failed and we were unable to recover it. 00:39:26.682 [2024-07-22 10:55:32.209954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.682 [2024-07-22 10:55:32.209980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.682 qpair failed and we were unable to recover it. 00:39:26.682 [2024-07-22 10:55:32.210291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.682 [2024-07-22 10:55:32.210309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.682 qpair failed and we were unable to recover it. 00:39:26.682 [2024-07-22 10:55:32.210627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.683 [2024-07-22 10:55:32.210640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.683 qpair failed and we were unable to recover it. 
00:39:26.683 [2024-07-22 10:55:32.210908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.683 [2024-07-22 10:55:32.210918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.683 qpair failed and we were unable to recover it. 00:39:26.683 [2024-07-22 10:55:32.211247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.683 [2024-07-22 10:55:32.211256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.683 qpair failed and we were unable to recover it. 00:39:26.683 [2024-07-22 10:55:32.211621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.683 [2024-07-22 10:55:32.211632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.683 qpair failed and we were unable to recover it. 00:39:26.683 [2024-07-22 10:55:32.211970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.683 [2024-07-22 10:55:32.211979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.683 qpair failed and we were unable to recover it. 00:39:26.683 [2024-07-22 10:55:32.212213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.683 [2024-07-22 10:55:32.212222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.683 qpair failed and we were unable to recover it. 00:39:26.683 [2024-07-22 10:55:32.212463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.683 [2024-07-22 10:55:32.212473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.683 qpair failed and we were unable to recover it. 00:39:26.683 [2024-07-22 10:55:32.212824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.683 [2024-07-22 10:55:32.212833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.683 qpair failed and we were unable to recover it. 00:39:26.683 [2024-07-22 10:55:32.213177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.683 [2024-07-22 10:55:32.213188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.683 qpair failed and we were unable to recover it. 00:39:26.683 [2024-07-22 10:55:32.213568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.683 [2024-07-22 10:55:32.213578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.683 qpair failed and we were unable to recover it. 00:39:26.683 [2024-07-22 10:55:32.213900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.683 [2024-07-22 10:55:32.213909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.683 qpair failed and we were unable to recover it. 
00:39:26.683 [2024-07-22 10:55:32.214137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.683 [2024-07-22 10:55:32.214146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.683 qpair failed and we were unable to recover it. 00:39:26.683 [2024-07-22 10:55:32.214365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.683 [2024-07-22 10:55:32.214375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.683 qpair failed and we were unable to recover it. 00:39:26.683 [2024-07-22 10:55:32.214680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.683 [2024-07-22 10:55:32.214690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.683 qpair failed and we were unable to recover it. 00:39:26.683 [2024-07-22 10:55:32.215012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.683 [2024-07-22 10:55:32.215022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.683 qpair failed and we were unable to recover it. 00:39:26.683 [2024-07-22 10:55:32.215247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.683 [2024-07-22 10:55:32.215256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.683 qpair failed and we were unable to recover it. 00:39:26.683 [2024-07-22 10:55:32.215574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.683 [2024-07-22 10:55:32.215584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.683 qpair failed and we were unable to recover it. 00:39:26.683 [2024-07-22 10:55:32.215752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.683 [2024-07-22 10:55:32.215762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.683 qpair failed and we were unable to recover it. 00:39:26.683 [2024-07-22 10:55:32.216167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.683 [2024-07-22 10:55:32.216176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.683 qpair failed and we were unable to recover it. 00:39:26.683 [2024-07-22 10:55:32.216563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.683 [2024-07-22 10:55:32.216573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.683 qpair failed and we were unable to recover it. 00:39:26.683 [2024-07-22 10:55:32.216899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.683 [2024-07-22 10:55:32.216908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.683 qpair failed and we were unable to recover it. 
00:39:26.683 [2024-07-22 10:55:32.217299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.683 [2024-07-22 10:55:32.217309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.683 qpair failed and we were unable to recover it. 00:39:26.683 [2024-07-22 10:55:32.217599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.683 [2024-07-22 10:55:32.217610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.683 qpair failed and we were unable to recover it. 00:39:26.683 [2024-07-22 10:55:32.217828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.683 [2024-07-22 10:55:32.217838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.683 qpair failed and we were unable to recover it. 00:39:26.683 [2024-07-22 10:55:32.218190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.683 [2024-07-22 10:55:32.218200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.683 qpair failed and we were unable to recover it. 00:39:26.683 [2024-07-22 10:55:32.218483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.683 [2024-07-22 10:55:32.218494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.683 qpair failed and we were unable to recover it. 00:39:26.683 [2024-07-22 10:55:32.218798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.683 [2024-07-22 10:55:32.218808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.683 qpair failed and we were unable to recover it. 00:39:26.683 [2024-07-22 10:55:32.219053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.683 [2024-07-22 10:55:32.219063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.683 qpair failed and we were unable to recover it. 00:39:26.683 [2024-07-22 10:55:32.219356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.683 [2024-07-22 10:55:32.219366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.683 qpair failed and we were unable to recover it. 00:39:26.683 [2024-07-22 10:55:32.219701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.683 [2024-07-22 10:55:32.219711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.683 qpair failed and we were unable to recover it. 00:39:26.683 [2024-07-22 10:55:32.220012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.683 [2024-07-22 10:55:32.220023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.683 qpair failed and we were unable to recover it. 
00:39:26.683 [2024-07-22 10:55:32.220368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.683 [2024-07-22 10:55:32.220381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.683 qpair failed and we were unable to recover it. 00:39:26.683 [2024-07-22 10:55:32.220601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.683 [2024-07-22 10:55:32.220610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.683 qpair failed and we were unable to recover it. 00:39:26.683 [2024-07-22 10:55:32.221003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.683 [2024-07-22 10:55:32.221012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.683 qpair failed and we were unable to recover it. 00:39:26.683 [2024-07-22 10:55:32.221368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.683 [2024-07-22 10:55:32.221378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.683 qpair failed and we were unable to recover it. 00:39:26.683 [2024-07-22 10:55:32.221716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.683 [2024-07-22 10:55:32.221726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.683 qpair failed and we were unable to recover it. 00:39:26.683 [2024-07-22 10:55:32.222074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.683 [2024-07-22 10:55:32.222085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.683 qpair failed and we were unable to recover it. 00:39:26.683 [2024-07-22 10:55:32.222436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.683 [2024-07-22 10:55:32.222446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.683 qpair failed and we were unable to recover it. 00:39:26.683 [2024-07-22 10:55:32.222782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.683 [2024-07-22 10:55:32.222792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.683 qpair failed and we were unable to recover it. 00:39:26.683 [2024-07-22 10:55:32.223108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.683 [2024-07-22 10:55:32.223117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.683 qpair failed and we were unable to recover it. 00:39:26.683 [2024-07-22 10:55:32.223468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.683 [2024-07-22 10:55:32.223478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.683 qpair failed and we were unable to recover it. 
00:39:26.683 [2024-07-22 10:55:32.223822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.683 [2024-07-22 10:55:32.223831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.684 qpair failed and we were unable to recover it. 00:39:26.684 [2024-07-22 10:55:32.224029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.684 [2024-07-22 10:55:32.224039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.684 qpair failed and we were unable to recover it. 00:39:26.684 [2024-07-22 10:55:32.224363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.684 [2024-07-22 10:55:32.224372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.684 qpair failed and we were unable to recover it. 00:39:26.684 [2024-07-22 10:55:32.224704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.684 [2024-07-22 10:55:32.224714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.684 qpair failed and we were unable to recover it. 00:39:26.684 [2024-07-22 10:55:32.225025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.684 [2024-07-22 10:55:32.225035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.684 qpair failed and we were unable to recover it. 00:39:26.684 [2024-07-22 10:55:32.225342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.684 [2024-07-22 10:55:32.225352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.684 qpair failed and we were unable to recover it. 00:39:26.684 [2024-07-22 10:55:32.225672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.684 [2024-07-22 10:55:32.225682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.684 qpair failed and we were unable to recover it. 00:39:26.684 [2024-07-22 10:55:32.226010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.684 [2024-07-22 10:55:32.226019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.684 qpair failed and we were unable to recover it. 00:39:26.684 [2024-07-22 10:55:32.226281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.684 [2024-07-22 10:55:32.226290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.684 qpair failed and we were unable to recover it. 00:39:26.684 [2024-07-22 10:55:32.226613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.684 [2024-07-22 10:55:32.226623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.684 qpair failed and we were unable to recover it. 
00:39:26.684 [2024-07-22 10:55:32.226978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.684 [2024-07-22 10:55:32.226988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.684 qpair failed and we were unable to recover it. 00:39:26.684 [2024-07-22 10:55:32.227319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.684 [2024-07-22 10:55:32.227328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.684 qpair failed and we were unable to recover it. 00:39:26.684 [2024-07-22 10:55:32.227651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.684 [2024-07-22 10:55:32.227662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.684 qpair failed and we were unable to recover it. 00:39:26.684 [2024-07-22 10:55:32.227996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.684 [2024-07-22 10:55:32.228005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.684 qpair failed and we were unable to recover it. 00:39:26.684 [2024-07-22 10:55:32.228386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.684 [2024-07-22 10:55:32.228400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.684 qpair failed and we were unable to recover it. 00:39:26.684 [2024-07-22 10:55:32.228754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.684 [2024-07-22 10:55:32.228764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.684 qpair failed and we were unable to recover it. 00:39:26.684 [2024-07-22 10:55:32.229026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.684 [2024-07-22 10:55:32.229035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.684 qpair failed and we were unable to recover it. 00:39:26.684 [2024-07-22 10:55:32.229356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.684 [2024-07-22 10:55:32.229367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.684 qpair failed and we were unable to recover it. 00:39:26.684 [2024-07-22 10:55:32.229679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.684 [2024-07-22 10:55:32.229689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.684 qpair failed and we were unable to recover it. 00:39:26.684 [2024-07-22 10:55:32.230034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.684 [2024-07-22 10:55:32.230044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.684 qpair failed and we were unable to recover it. 
00:39:26.684 [2024-07-22 10:55:32.230389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.684 [2024-07-22 10:55:32.230406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.684 qpair failed and we were unable to recover it. 00:39:26.684 [2024-07-22 10:55:32.230732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.684 [2024-07-22 10:55:32.230741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.684 qpair failed and we were unable to recover it. 00:39:26.684 [2024-07-22 10:55:32.231084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.684 [2024-07-22 10:55:32.231094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.684 qpair failed and we were unable to recover it. 00:39:26.684 [2024-07-22 10:55:32.231414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.684 [2024-07-22 10:55:32.231424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.684 qpair failed and we were unable to recover it. 00:39:26.684 [2024-07-22 10:55:32.231695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.684 [2024-07-22 10:55:32.231704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.684 qpair failed and we were unable to recover it. 00:39:26.684 [2024-07-22 10:55:32.232040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.684 [2024-07-22 10:55:32.232049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.684 qpair failed and we were unable to recover it. 00:39:26.684 [2024-07-22 10:55:32.232391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.684 [2024-07-22 10:55:32.232405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.684 qpair failed and we were unable to recover it. 00:39:26.684 [2024-07-22 10:55:32.232752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.684 [2024-07-22 10:55:32.232762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.684 qpair failed and we were unable to recover it. 00:39:26.684 [2024-07-22 10:55:32.233088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.684 [2024-07-22 10:55:32.233098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.684 qpair failed and we were unable to recover it. 00:39:26.684 [2024-07-22 10:55:32.233428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.684 [2024-07-22 10:55:32.233437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.684 qpair failed and we were unable to recover it. 
00:39:26.684 [2024-07-22 10:55:32.233921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.684 [2024-07-22 10:55:32.233931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.684 qpair failed and we were unable to recover it. 00:39:26.684 [2024-07-22 10:55:32.234240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.684 [2024-07-22 10:55:32.234250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.684 qpair failed and we were unable to recover it. 00:39:26.684 [2024-07-22 10:55:32.234588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.684 [2024-07-22 10:55:32.234598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.684 qpair failed and we were unable to recover it. 00:39:26.684 [2024-07-22 10:55:32.234826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.684 [2024-07-22 10:55:32.234835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.684 qpair failed and we were unable to recover it. 00:39:26.684 [2024-07-22 10:55:32.235132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.684 [2024-07-22 10:55:32.235141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.684 qpair failed and we were unable to recover it. 00:39:26.684 [2024-07-22 10:55:32.235608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.684 [2024-07-22 10:55:32.235618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.684 qpair failed and we were unable to recover it. 00:39:26.684 [2024-07-22 10:55:32.235953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.685 [2024-07-22 10:55:32.235963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.685 qpair failed and we were unable to recover it. 00:39:26.685 [2024-07-22 10:55:32.236191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.685 [2024-07-22 10:55:32.236200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.685 qpair failed and we were unable to recover it. 00:39:26.685 [2024-07-22 10:55:32.236524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.685 [2024-07-22 10:55:32.236533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.685 qpair failed and we were unable to recover it. 00:39:26.685 [2024-07-22 10:55:32.236849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.685 [2024-07-22 10:55:32.236859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.685 qpair failed and we were unable to recover it. 
00:39:26.685 [2024-07-22 10:55:32.237200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.685 [2024-07-22 10:55:32.237210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.685 qpair failed and we were unable to recover it. 00:39:26.685 [2024-07-22 10:55:32.237527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.685 [2024-07-22 10:55:32.237537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.685 qpair failed and we were unable to recover it. 00:39:26.685 [2024-07-22 10:55:32.237877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.685 [2024-07-22 10:55:32.237887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.685 qpair failed and we were unable to recover it. 00:39:26.685 [2024-07-22 10:55:32.238189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.685 [2024-07-22 10:55:32.238199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.685 qpair failed and we were unable to recover it. 00:39:26.685 [2024-07-22 10:55:32.238480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.685 [2024-07-22 10:55:32.238492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.685 qpair failed and we were unable to recover it. 00:39:26.685 [2024-07-22 10:55:32.238825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.685 [2024-07-22 10:55:32.238835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.685 qpair failed and we were unable to recover it. 00:39:26.685 [2024-07-22 10:55:32.239122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.685 [2024-07-22 10:55:32.239131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.685 qpair failed and we were unable to recover it. 00:39:26.685 [2024-07-22 10:55:32.239333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.685 [2024-07-22 10:55:32.239342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.685 qpair failed and we were unable to recover it. 00:39:26.685 [2024-07-22 10:55:32.239690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.685 [2024-07-22 10:55:32.239700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.685 qpair failed and we were unable to recover it. 00:39:26.685 [2024-07-22 10:55:32.240010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.685 [2024-07-22 10:55:32.240021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.685 qpair failed and we were unable to recover it. 
00:39:26.685 [2024-07-22 10:55:32.240316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.685 [2024-07-22 10:55:32.240325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.685 qpair failed and we were unable to recover it. 00:39:26.685 [2024-07-22 10:55:32.240657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.685 [2024-07-22 10:55:32.240667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.685 qpair failed and we were unable to recover it. 00:39:26.685 [2024-07-22 10:55:32.240960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.685 [2024-07-22 10:55:32.240969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.685 qpair failed and we were unable to recover it. 00:39:26.685 [2024-07-22 10:55:32.241211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.685 [2024-07-22 10:55:32.241220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.685 qpair failed and we were unable to recover it. 00:39:26.685 [2024-07-22 10:55:32.241536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.685 [2024-07-22 10:55:32.241545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.685 qpair failed and we were unable to recover it. 00:39:26.685 [2024-07-22 10:55:32.241764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.685 [2024-07-22 10:55:32.241773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.685 qpair failed and we were unable to recover it. 00:39:26.685 [2024-07-22 10:55:32.242069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.685 [2024-07-22 10:55:32.242079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.685 qpair failed and we were unable to recover it. 00:39:26.685 [2024-07-22 10:55:32.242371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.685 [2024-07-22 10:55:32.242381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.685 qpair failed and we were unable to recover it. 00:39:26.685 [2024-07-22 10:55:32.242692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.685 [2024-07-22 10:55:32.242703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.685 qpair failed and we were unable to recover it. 00:39:26.685 [2024-07-22 10:55:32.243056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.685 [2024-07-22 10:55:32.243065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.685 qpair failed and we were unable to recover it. 
00:39:26.685 [2024-07-22 10:55:32.243382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.685 [2024-07-22 10:55:32.243392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.685 qpair failed and we were unable to recover it. 00:39:26.685 [2024-07-22 10:55:32.243725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.685 [2024-07-22 10:55:32.243734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.685 qpair failed and we were unable to recover it. 00:39:26.685 [2024-07-22 10:55:32.244048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.685 [2024-07-22 10:55:32.244058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.685 qpair failed and we were unable to recover it. 00:39:26.685 [2024-07-22 10:55:32.244422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.685 [2024-07-22 10:55:32.244433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.685 qpair failed and we were unable to recover it. 00:39:26.685 [2024-07-22 10:55:32.244804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.685 [2024-07-22 10:55:32.244814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.685 qpair failed and we were unable to recover it. 00:39:26.685 [2024-07-22 10:55:32.245012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.685 [2024-07-22 10:55:32.245022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.685 qpair failed and we were unable to recover it. 00:39:26.685 [2024-07-22 10:55:32.245328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.685 [2024-07-22 10:55:32.245337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.685 qpair failed and we were unable to recover it. 00:39:26.685 [2024-07-22 10:55:32.245741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.685 [2024-07-22 10:55:32.245750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.685 qpair failed and we were unable to recover it. 00:39:26.685 [2024-07-22 10:55:32.246098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.685 [2024-07-22 10:55:32.246107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.685 qpair failed and we were unable to recover it. 00:39:26.685 [2024-07-22 10:55:32.246444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.685 [2024-07-22 10:55:32.246453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.685 qpair failed and we were unable to recover it. 
00:39:26.685 [2024-07-22 10:55:32.246763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.685 [2024-07-22 10:55:32.246773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.685 qpair failed and we were unable to recover it. 00:39:26.685 [2024-07-22 10:55:32.247089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.685 [2024-07-22 10:55:32.247099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.685 qpair failed and we were unable to recover it. 00:39:26.685 [2024-07-22 10:55:32.247420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.685 [2024-07-22 10:55:32.247430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.685 qpair failed and we were unable to recover it. 00:39:26.685 [2024-07-22 10:55:32.247784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.685 [2024-07-22 10:55:32.247794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.685 qpair failed and we were unable to recover it. 00:39:26.685 [2024-07-22 10:55:32.248115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.685 [2024-07-22 10:55:32.248125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.685 qpair failed and we were unable to recover it. 00:39:26.685 [2024-07-22 10:55:32.248410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.685 [2024-07-22 10:55:32.248420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.685 qpair failed and we were unable to recover it. 00:39:26.685 [2024-07-22 10:55:32.248603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.685 [2024-07-22 10:55:32.248614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.685 qpair failed and we were unable to recover it. 00:39:26.685 [2024-07-22 10:55:32.248938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.686 [2024-07-22 10:55:32.248947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.686 qpair failed and we were unable to recover it. 00:39:26.686 [2024-07-22 10:55:32.249312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.686 [2024-07-22 10:55:32.249321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.686 qpair failed and we were unable to recover it. 00:39:26.686 [2024-07-22 10:55:32.249725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.686 [2024-07-22 10:55:32.249735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.686 qpair failed and we were unable to recover it. 
00:39:26.686 [2024-07-22 10:55:32.250089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.686 [2024-07-22 10:55:32.250099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.686 qpair failed and we were unable to recover it. 00:39:26.686 [2024-07-22 10:55:32.250403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.686 [2024-07-22 10:55:32.250413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.686 qpair failed and we were unable to recover it. 00:39:26.686 [2024-07-22 10:55:32.250651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.686 [2024-07-22 10:55:32.250660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.686 qpair failed and we were unable to recover it. 00:39:26.686 [2024-07-22 10:55:32.250981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.686 [2024-07-22 10:55:32.250991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.686 qpair failed and we were unable to recover it. 00:39:26.686 [2024-07-22 10:55:32.251309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.686 [2024-07-22 10:55:32.251318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.686 qpair failed and we were unable to recover it. 00:39:26.686 [2024-07-22 10:55:32.251642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.686 [2024-07-22 10:55:32.251653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.686 qpair failed and we were unable to recover it. 00:39:26.686 [2024-07-22 10:55:32.251882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.686 [2024-07-22 10:55:32.251892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.686 qpair failed and we were unable to recover it. 00:39:26.686 [2024-07-22 10:55:32.252234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.686 [2024-07-22 10:55:32.252244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.686 qpair failed and we were unable to recover it. 00:39:26.686 [2024-07-22 10:55:32.252540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.686 [2024-07-22 10:55:32.252550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.686 qpair failed and we were unable to recover it. 00:39:26.686 [2024-07-22 10:55:32.252909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.686 [2024-07-22 10:55:32.252918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.686 qpair failed and we were unable to recover it. 
00:39:26.686 [2024-07-22 10:55:32.253225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.686 [2024-07-22 10:55:32.253235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.686 qpair failed and we were unable to recover it. 00:39:26.686 [2024-07-22 10:55:32.253573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.686 [2024-07-22 10:55:32.253582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.686 qpair failed and we were unable to recover it. 00:39:26.686 [2024-07-22 10:55:32.253905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.686 [2024-07-22 10:55:32.253915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.686 qpair failed and we were unable to recover it. 00:39:26.686 [2024-07-22 10:55:32.254130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.686 [2024-07-22 10:55:32.254140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.686 qpair failed and we were unable to recover it. 00:39:26.686 [2024-07-22 10:55:32.254491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.686 [2024-07-22 10:55:32.254500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.686 qpair failed and we were unable to recover it. 00:39:26.686 [2024-07-22 10:55:32.254716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.686 [2024-07-22 10:55:32.254725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.686 qpair failed and we were unable to recover it. 00:39:26.686 [2024-07-22 10:55:32.255060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.686 [2024-07-22 10:55:32.255069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.686 qpair failed and we were unable to recover it. 00:39:26.686 [2024-07-22 10:55:32.255293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.686 [2024-07-22 10:55:32.255302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.686 qpair failed and we were unable to recover it. 00:39:26.686 [2024-07-22 10:55:32.255627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.686 [2024-07-22 10:55:32.255637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.686 qpair failed and we were unable to recover it. 00:39:26.686 [2024-07-22 10:55:32.255950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.686 [2024-07-22 10:55:32.255960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.686 qpair failed and we were unable to recover it. 
00:39:26.686 [2024-07-22 10:55:32.256312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.686 [2024-07-22 10:55:32.256322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.686 qpair failed and we were unable to recover it. 00:39:26.686 [2024-07-22 10:55:32.256614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.686 [2024-07-22 10:55:32.256624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.686 qpair failed and we were unable to recover it. 00:39:26.686 [2024-07-22 10:55:32.256939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.686 [2024-07-22 10:55:32.256949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.686 qpair failed and we were unable to recover it. 00:39:26.686 [2024-07-22 10:55:32.257183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.686 [2024-07-22 10:55:32.257193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.686 qpair failed and we were unable to recover it. 00:39:26.686 [2024-07-22 10:55:32.257431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.686 [2024-07-22 10:55:32.257441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.686 qpair failed and we were unable to recover it. 00:39:26.686 [2024-07-22 10:55:32.257775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.686 [2024-07-22 10:55:32.257785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.686 qpair failed and we were unable to recover it. 00:39:26.686 [2024-07-22 10:55:32.258017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.686 [2024-07-22 10:55:32.258026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.686 qpair failed and we were unable to recover it. 00:39:26.686 [2024-07-22 10:55:32.258355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.686 [2024-07-22 10:55:32.258364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.686 qpair failed and we were unable to recover it. 00:39:26.686 [2024-07-22 10:55:32.258682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.686 [2024-07-22 10:55:32.258691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.686 qpair failed and we were unable to recover it. 00:39:26.686 [2024-07-22 10:55:32.259030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.686 [2024-07-22 10:55:32.259039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.686 qpair failed and we were unable to recover it. 
00:39:26.686 [2024-07-22 10:55:32.259347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.686 [2024-07-22 10:55:32.259356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.686 qpair failed and we were unable to recover it. 00:39:26.686 [2024-07-22 10:55:32.259526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.686 [2024-07-22 10:55:32.259535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.686 qpair failed and we were unable to recover it. 00:39:26.686 [2024-07-22 10:55:32.259785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.686 [2024-07-22 10:55:32.259797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.686 qpair failed and we were unable to recover it. 00:39:26.686 [2024-07-22 10:55:32.260127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.686 [2024-07-22 10:55:32.260136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.686 qpair failed and we were unable to recover it. 00:39:26.686 [2024-07-22 10:55:32.260471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.686 [2024-07-22 10:55:32.260481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.686 qpair failed and we were unable to recover it. 00:39:26.686 [2024-07-22 10:55:32.260783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.686 [2024-07-22 10:55:32.260793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.686 qpair failed and we were unable to recover it. 00:39:26.686 [2024-07-22 10:55:32.261112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.686 [2024-07-22 10:55:32.261121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.686 qpair failed and we were unable to recover it. 00:39:26.686 [2024-07-22 10:55:32.261541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.686 [2024-07-22 10:55:32.261551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.686 qpair failed and we were unable to recover it. 00:39:26.686 [2024-07-22 10:55:32.261860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.687 [2024-07-22 10:55:32.261869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.687 qpair failed and we were unable to recover it. 00:39:26.687 [2024-07-22 10:55:32.262179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.687 [2024-07-22 10:55:32.262189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.687 qpair failed and we were unable to recover it. 
00:39:26.687 [2024-07-22 10:55:32.262526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.687 [2024-07-22 10:55:32.262535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.687 qpair failed and we were unable to recover it. 00:39:26.687 [2024-07-22 10:55:32.262737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.687 [2024-07-22 10:55:32.262746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.687 qpair failed and we were unable to recover it. 00:39:26.687 [2024-07-22 10:55:32.263072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.687 [2024-07-22 10:55:32.263082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.687 qpair failed and we were unable to recover it. 00:39:26.687 [2024-07-22 10:55:32.263311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.687 [2024-07-22 10:55:32.263321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.687 qpair failed and we were unable to recover it. 00:39:26.687 [2024-07-22 10:55:32.263724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.687 [2024-07-22 10:55:32.263734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.687 qpair failed and we were unable to recover it. 00:39:26.687 [2024-07-22 10:55:32.263957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.687 [2024-07-22 10:55:32.263967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.687 qpair failed and we were unable to recover it. 00:39:26.687 [2024-07-22 10:55:32.264280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.687 [2024-07-22 10:55:32.264289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.687 qpair failed and we were unable to recover it. 00:39:26.687 [2024-07-22 10:55:32.264505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.687 [2024-07-22 10:55:32.264515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.687 qpair failed and we were unable to recover it. 00:39:26.687 [2024-07-22 10:55:32.264866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.687 [2024-07-22 10:55:32.264876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.687 qpair failed and we were unable to recover it. 00:39:26.687 [2024-07-22 10:55:32.265198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.687 [2024-07-22 10:55:32.265208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.687 qpair failed and we were unable to recover it. 
00:39:26.687 [2024-07-22 10:55:32.265584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.687 [2024-07-22 10:55:32.265595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.687 qpair failed and we were unable to recover it. 00:39:26.687 [2024-07-22 10:55:32.265941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.687 [2024-07-22 10:55:32.265950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.687 qpair failed and we were unable to recover it. 00:39:26.687 [2024-07-22 10:55:32.266180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.687 [2024-07-22 10:55:32.266189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.687 qpair failed and we were unable to recover it. 00:39:26.687 [2024-07-22 10:55:32.266483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.687 [2024-07-22 10:55:32.266492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.687 qpair failed and we were unable to recover it. 00:39:26.687 [2024-07-22 10:55:32.266752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.687 [2024-07-22 10:55:32.266762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.687 qpair failed and we were unable to recover it. 00:39:26.687 [2024-07-22 10:55:32.266982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.687 [2024-07-22 10:55:32.266991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.687 qpair failed and we were unable to recover it. 00:39:26.687 [2024-07-22 10:55:32.267292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.687 [2024-07-22 10:55:32.267301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.687 qpair failed and we were unable to recover it. 00:39:26.687 [2024-07-22 10:55:32.267626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.687 [2024-07-22 10:55:32.267635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.687 qpair failed and we were unable to recover it. 00:39:26.687 [2024-07-22 10:55:32.267839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.687 [2024-07-22 10:55:32.267848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.687 qpair failed and we were unable to recover it. 00:39:26.687 [2024-07-22 10:55:32.268166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.687 [2024-07-22 10:55:32.268176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.687 qpair failed and we were unable to recover it. 
00:39:26.687 [2024-07-22 10:55:32.268498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.687 [2024-07-22 10:55:32.268508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:26.687 qpair failed and we were unable to recover it.
00:39:26.687 - 00:39:26.692 [the same three-line error sequence repeats for every reconnect attempt between 2024-07-22 10:55:32.268838 and 10:55:32.335298: connect() failed with errno = 111 in posix_sock_create, followed by the nvme_tcp_qpair_connect_sock error for tqpair=0x1c098a0 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it."]
00:39:26.692 [2024-07-22 10:55:32.335621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.692 [2024-07-22 10:55:32.335632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.693 qpair failed and we were unable to recover it. 00:39:26.693 [2024-07-22 10:55:32.335991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.693 [2024-07-22 10:55:32.336001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.693 qpair failed and we were unable to recover it. 00:39:26.693 [2024-07-22 10:55:32.336341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.693 [2024-07-22 10:55:32.336351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.693 qpair failed and we were unable to recover it. 00:39:26.693 [2024-07-22 10:55:32.336675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.693 [2024-07-22 10:55:32.336685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.693 qpair failed and we were unable to recover it. 00:39:26.693 [2024-07-22 10:55:32.336998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.693 [2024-07-22 10:55:32.337008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.693 qpair failed and we were unable to recover it. 00:39:26.693 [2024-07-22 10:55:32.337352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.693 [2024-07-22 10:55:32.337362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.693 qpair failed and we were unable to recover it. 00:39:26.693 [2024-07-22 10:55:32.337658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.693 [2024-07-22 10:55:32.337671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.693 qpair failed and we were unable to recover it. 00:39:26.693 [2024-07-22 10:55:32.337987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.693 [2024-07-22 10:55:32.337997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.693 qpair failed and we were unable to recover it. 00:39:26.693 [2024-07-22 10:55:32.338346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.693 [2024-07-22 10:55:32.338356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.693 qpair failed and we were unable to recover it. 00:39:26.693 [2024-07-22 10:55:32.338704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.693 [2024-07-22 10:55:32.338714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.693 qpair failed and we were unable to recover it. 
00:39:26.693 [2024-07-22 10:55:32.339059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.693 [2024-07-22 10:55:32.339069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.693 qpair failed and we were unable to recover it. 00:39:26.693 [2024-07-22 10:55:32.339296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.693 [2024-07-22 10:55:32.339307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.693 qpair failed and we were unable to recover it. 00:39:26.693 [2024-07-22 10:55:32.339644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.693 [2024-07-22 10:55:32.339655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.693 qpair failed and we were unable to recover it. 00:39:26.693 [2024-07-22 10:55:32.339877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.693 [2024-07-22 10:55:32.339888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.693 qpair failed and we were unable to recover it. 00:39:26.693 [2024-07-22 10:55:32.340190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.693 [2024-07-22 10:55:32.340201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.693 qpair failed and we were unable to recover it. 00:39:26.693 [2024-07-22 10:55:32.340522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.693 [2024-07-22 10:55:32.340532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.693 qpair failed and we were unable to recover it. 00:39:26.693 [2024-07-22 10:55:32.340828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.693 [2024-07-22 10:55:32.340837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.693 qpair failed and we were unable to recover it. 00:39:26.693 [2024-07-22 10:55:32.341176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.693 [2024-07-22 10:55:32.341186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.693 qpair failed and we were unable to recover it. 00:39:26.693 [2024-07-22 10:55:32.341503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.693 [2024-07-22 10:55:32.341513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.693 qpair failed and we were unable to recover it. 00:39:26.693 [2024-07-22 10:55:32.341838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.693 [2024-07-22 10:55:32.341848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.693 qpair failed and we were unable to recover it. 
00:39:26.693 [2024-07-22 10:55:32.342165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.693 [2024-07-22 10:55:32.342174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.693 qpair failed and we were unable to recover it. 00:39:26.693 [2024-07-22 10:55:32.342321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.693 [2024-07-22 10:55:32.342331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.693 qpair failed and we were unable to recover it. 00:39:26.693 [2024-07-22 10:55:32.342603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.693 [2024-07-22 10:55:32.342613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.693 qpair failed and we were unable to recover it. 00:39:26.693 [2024-07-22 10:55:32.342945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.693 [2024-07-22 10:55:32.342955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.693 qpair failed and we were unable to recover it. 00:39:26.693 [2024-07-22 10:55:32.343254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.693 [2024-07-22 10:55:32.343264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.693 qpair failed and we were unable to recover it. 00:39:26.693 [2024-07-22 10:55:32.343568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.693 [2024-07-22 10:55:32.343577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.693 qpair failed and we were unable to recover it. 00:39:26.693 [2024-07-22 10:55:32.343902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.693 [2024-07-22 10:55:32.343911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.693 qpair failed and we were unable to recover it. 00:39:26.693 [2024-07-22 10:55:32.344238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.693 [2024-07-22 10:55:32.344247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.693 qpair failed and we were unable to recover it. 00:39:26.693 [2024-07-22 10:55:32.344589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.693 [2024-07-22 10:55:32.344599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.693 qpair failed and we were unable to recover it. 00:39:26.693 [2024-07-22 10:55:32.344912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.693 [2024-07-22 10:55:32.344921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.693 qpair failed and we were unable to recover it. 
00:39:26.693 [2024-07-22 10:55:32.345238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.693 [2024-07-22 10:55:32.345248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.693 qpair failed and we were unable to recover it. 00:39:26.693 [2024-07-22 10:55:32.345571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.693 [2024-07-22 10:55:32.345581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.693 qpair failed and we were unable to recover it. 00:39:26.693 [2024-07-22 10:55:32.345888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.693 [2024-07-22 10:55:32.345898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.693 qpair failed and we were unable to recover it. 00:39:26.693 [2024-07-22 10:55:32.346240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.693 [2024-07-22 10:55:32.346252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.693 qpair failed and we were unable to recover it. 00:39:26.693 [2024-07-22 10:55:32.346555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.693 [2024-07-22 10:55:32.346566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.693 qpair failed and we were unable to recover it. 00:39:26.693 [2024-07-22 10:55:32.346895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.693 [2024-07-22 10:55:32.346904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.693 qpair failed and we were unable to recover it. 00:39:26.693 [2024-07-22 10:55:32.347222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.693 [2024-07-22 10:55:32.347231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.693 qpair failed and we were unable to recover it. 00:39:26.693 [2024-07-22 10:55:32.347578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.693 [2024-07-22 10:55:32.347588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.693 qpair failed and we were unable to recover it. 00:39:26.693 [2024-07-22 10:55:32.347905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.693 [2024-07-22 10:55:32.347915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.693 qpair failed and we were unable to recover it. 00:39:26.693 [2024-07-22 10:55:32.348204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.693 [2024-07-22 10:55:32.348214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.693 qpair failed and we were unable to recover it. 
00:39:26.693 [2024-07-22 10:55:32.348518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.693 [2024-07-22 10:55:32.348528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.693 qpair failed and we were unable to recover it. 00:39:26.693 [2024-07-22 10:55:32.349432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.693 [2024-07-22 10:55:32.349452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.694 qpair failed and we were unable to recover it. 00:39:26.694 [2024-07-22 10:55:32.349778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.694 [2024-07-22 10:55:32.349789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.694 qpair failed and we were unable to recover it. 00:39:26.694 [2024-07-22 10:55:32.350091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.694 [2024-07-22 10:55:32.350101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.694 qpair failed and we were unable to recover it. 00:39:26.694 [2024-07-22 10:55:32.350457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.694 [2024-07-22 10:55:32.350467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.694 qpair failed and we were unable to recover it. 00:39:26.694 [2024-07-22 10:55:32.350819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.694 [2024-07-22 10:55:32.350829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.694 qpair failed and we were unable to recover it. 00:39:26.694 [2024-07-22 10:55:32.351170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.694 [2024-07-22 10:55:32.351180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.694 qpair failed and we were unable to recover it. 00:39:26.694 [2024-07-22 10:55:32.351500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.694 [2024-07-22 10:55:32.351510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.694 qpair failed and we were unable to recover it. 00:39:26.694 [2024-07-22 10:55:32.351819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.694 [2024-07-22 10:55:32.351828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.694 qpair failed and we were unable to recover it. 00:39:26.694 [2024-07-22 10:55:32.352172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.694 [2024-07-22 10:55:32.352183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.694 qpair failed and we were unable to recover it. 
00:39:26.694 [2024-07-22 10:55:32.352514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.694 [2024-07-22 10:55:32.352525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.694 qpair failed and we were unable to recover it. 00:39:26.694 [2024-07-22 10:55:32.352849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.694 [2024-07-22 10:55:32.352858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.694 qpair failed and we were unable to recover it. 00:39:26.694 [2024-07-22 10:55:32.353194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.694 [2024-07-22 10:55:32.353204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.694 qpair failed and we were unable to recover it. 00:39:26.694 [2024-07-22 10:55:32.353559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.694 [2024-07-22 10:55:32.353569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.694 qpair failed and we were unable to recover it. 00:39:26.694 [2024-07-22 10:55:32.353882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.694 [2024-07-22 10:55:32.353892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.694 qpair failed and we were unable to recover it. 00:39:26.694 [2024-07-22 10:55:32.354217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.694 [2024-07-22 10:55:32.354227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.694 qpair failed and we were unable to recover it. 00:39:26.694 [2024-07-22 10:55:32.354579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.694 [2024-07-22 10:55:32.354589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.694 qpair failed and we were unable to recover it. 00:39:26.968 [2024-07-22 10:55:32.354936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.968 [2024-07-22 10:55:32.354947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.968 qpair failed and we were unable to recover it. 00:39:26.968 [2024-07-22 10:55:32.355244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.968 [2024-07-22 10:55:32.355254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.968 qpair failed and we were unable to recover it. 00:39:26.968 [2024-07-22 10:55:32.355550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.968 [2024-07-22 10:55:32.355559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.969 qpair failed and we were unable to recover it. 
00:39:26.969 [2024-07-22 10:55:32.355867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.969 [2024-07-22 10:55:32.355877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.969 qpair failed and we were unable to recover it. 00:39:26.969 [2024-07-22 10:55:32.356172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.969 [2024-07-22 10:55:32.356182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.969 qpair failed and we were unable to recover it. 00:39:26.969 [2024-07-22 10:55:32.356480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.969 [2024-07-22 10:55:32.356489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.969 qpair failed and we were unable to recover it. 00:39:26.969 [2024-07-22 10:55:32.356705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.969 [2024-07-22 10:55:32.356715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.969 qpair failed and we were unable to recover it. 00:39:26.969 [2024-07-22 10:55:32.356953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.969 [2024-07-22 10:55:32.356963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.969 qpair failed and we were unable to recover it. 00:39:26.969 [2024-07-22 10:55:32.357272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.969 [2024-07-22 10:55:32.357281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.969 qpair failed and we were unable to recover it. 00:39:26.969 [2024-07-22 10:55:32.357561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.969 [2024-07-22 10:55:32.357571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.969 qpair failed and we were unable to recover it. 00:39:26.969 [2024-07-22 10:55:32.357815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.969 [2024-07-22 10:55:32.357824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.969 qpair failed and we were unable to recover it. 00:39:26.969 [2024-07-22 10:55:32.357963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.969 [2024-07-22 10:55:32.357972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.969 qpair failed and we were unable to recover it. 00:39:26.969 [2024-07-22 10:55:32.358278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.969 [2024-07-22 10:55:32.358287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.969 qpair failed and we were unable to recover it. 
00:39:26.969 [2024-07-22 10:55:32.358637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.969 [2024-07-22 10:55:32.358647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.969 qpair failed and we were unable to recover it. 00:39:26.969 [2024-07-22 10:55:32.359061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.969 [2024-07-22 10:55:32.359070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.969 qpair failed and we were unable to recover it. 00:39:26.969 [2024-07-22 10:55:32.359389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.969 [2024-07-22 10:55:32.359410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.969 qpair failed and we were unable to recover it. 00:39:26.969 [2024-07-22 10:55:32.359630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.969 [2024-07-22 10:55:32.359640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.969 qpair failed and we were unable to recover it. 00:39:26.969 [2024-07-22 10:55:32.359969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.969 [2024-07-22 10:55:32.359981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.969 qpair failed and we were unable to recover it. 00:39:26.969 [2024-07-22 10:55:32.360309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.969 [2024-07-22 10:55:32.360319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.969 qpair failed and we were unable to recover it. 00:39:26.969 [2024-07-22 10:55:32.360555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.969 [2024-07-22 10:55:32.360565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.969 qpair failed and we were unable to recover it. 00:39:26.969 [2024-07-22 10:55:32.360894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.969 [2024-07-22 10:55:32.360903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.969 qpair failed and we were unable to recover it. 00:39:26.969 [2024-07-22 10:55:32.361246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.969 [2024-07-22 10:55:32.361256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.969 qpair failed and we were unable to recover it. 00:39:26.969 [2024-07-22 10:55:32.361629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.969 [2024-07-22 10:55:32.361638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.969 qpair failed and we were unable to recover it. 
00:39:26.969 [2024-07-22 10:55:32.361977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.969 [2024-07-22 10:55:32.361986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.969 qpair failed and we were unable to recover it. 00:39:26.969 [2024-07-22 10:55:32.362308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.969 [2024-07-22 10:55:32.362318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.969 qpair failed and we were unable to recover it. 00:39:26.969 [2024-07-22 10:55:32.362672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.969 [2024-07-22 10:55:32.362681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.969 qpair failed and we were unable to recover it. 00:39:26.969 [2024-07-22 10:55:32.362991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.969 [2024-07-22 10:55:32.363001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.969 qpair failed and we were unable to recover it. 00:39:26.969 [2024-07-22 10:55:32.363350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.969 [2024-07-22 10:55:32.363359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.969 qpair failed and we were unable to recover it. 00:39:26.969 [2024-07-22 10:55:32.363730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.969 [2024-07-22 10:55:32.363739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.969 qpair failed and we were unable to recover it. 00:39:26.969 [2024-07-22 10:55:32.364057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.969 [2024-07-22 10:55:32.364066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.969 qpair failed and we were unable to recover it. 00:39:26.969 [2024-07-22 10:55:32.364399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.969 [2024-07-22 10:55:32.364408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.969 qpair failed and we were unable to recover it. 00:39:26.969 [2024-07-22 10:55:32.364722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.969 [2024-07-22 10:55:32.364731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.969 qpair failed and we were unable to recover it. 00:39:26.969 [2024-07-22 10:55:32.365081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.969 [2024-07-22 10:55:32.365090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.969 qpair failed and we were unable to recover it. 
00:39:26.969 [2024-07-22 10:55:32.365433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.969 [2024-07-22 10:55:32.365444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.969 qpair failed and we were unable to recover it. 00:39:26.969 [2024-07-22 10:55:32.365780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.969 [2024-07-22 10:55:32.365790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.969 qpair failed and we were unable to recover it. 00:39:26.969 [2024-07-22 10:55:32.366102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.969 [2024-07-22 10:55:32.366112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.969 qpair failed and we were unable to recover it. 00:39:26.969 [2024-07-22 10:55:32.366349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.969 [2024-07-22 10:55:32.366359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.969 qpair failed and we were unable to recover it. 00:39:26.969 [2024-07-22 10:55:32.366614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.969 [2024-07-22 10:55:32.366624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.969 qpair failed and we were unable to recover it. 00:39:26.969 [2024-07-22 10:55:32.366866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.969 [2024-07-22 10:55:32.366875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.969 qpair failed and we were unable to recover it. 00:39:26.969 [2024-07-22 10:55:32.367068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.969 [2024-07-22 10:55:32.367077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.969 qpair failed and we were unable to recover it. 00:39:26.969 [2024-07-22 10:55:32.367388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.969 [2024-07-22 10:55:32.367401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.969 qpair failed and we were unable to recover it. 00:39:26.969 [2024-07-22 10:55:32.367782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.969 [2024-07-22 10:55:32.367791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.969 qpair failed and we were unable to recover it. 00:39:26.969 [2024-07-22 10:55:32.368103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.969 [2024-07-22 10:55:32.368114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.970 qpair failed and we were unable to recover it. 
00:39:26.970 [2024-07-22 10:55:32.368457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.970 [2024-07-22 10:55:32.368466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.970 qpair failed and we were unable to recover it. 00:39:26.970 [2024-07-22 10:55:32.368795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.970 [2024-07-22 10:55:32.368806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.970 qpair failed and we were unable to recover it. 00:39:26.970 [2024-07-22 10:55:32.369116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.970 [2024-07-22 10:55:32.369126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.970 qpair failed and we were unable to recover it. 00:39:26.970 [2024-07-22 10:55:32.369452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.970 [2024-07-22 10:55:32.369462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.970 qpair failed and we were unable to recover it. 00:39:26.970 [2024-07-22 10:55:32.369811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.970 [2024-07-22 10:55:32.369820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.970 qpair failed and we were unable to recover it. 00:39:26.970 [2024-07-22 10:55:32.370144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.970 [2024-07-22 10:55:32.370153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.970 qpair failed and we were unable to recover it. 00:39:26.970 [2024-07-22 10:55:32.370446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.970 [2024-07-22 10:55:32.370456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.970 qpair failed and we were unable to recover it. 00:39:26.970 [2024-07-22 10:55:32.370702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.970 [2024-07-22 10:55:32.370712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.970 qpair failed and we were unable to recover it. 00:39:26.970 [2024-07-22 10:55:32.370955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.970 [2024-07-22 10:55:32.370965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.970 qpair failed and we were unable to recover it. 00:39:26.970 [2024-07-22 10:55:32.371271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.970 [2024-07-22 10:55:32.371281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.970 qpair failed and we were unable to recover it. 
00:39:26.970 [2024-07-22 10:55:32.371672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.970 [2024-07-22 10:55:32.371683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.970 qpair failed and we were unable to recover it. 00:39:26.970 [2024-07-22 10:55:32.371891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.970 [2024-07-22 10:55:32.371900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.970 qpair failed and we were unable to recover it. 00:39:26.970 [2024-07-22 10:55:32.372228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.970 [2024-07-22 10:55:32.372237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.970 qpair failed and we were unable to recover it. 00:39:26.970 [2024-07-22 10:55:32.372446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.970 [2024-07-22 10:55:32.372456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.970 qpair failed and we were unable to recover it. 00:39:26.970 [2024-07-22 10:55:32.372821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.970 [2024-07-22 10:55:32.372831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.970 qpair failed and we were unable to recover it. 00:39:26.970 [2024-07-22 10:55:32.373136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.970 [2024-07-22 10:55:32.373145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.970 qpair failed and we were unable to recover it. 00:39:26.970 [2024-07-22 10:55:32.373352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.970 [2024-07-22 10:55:32.373361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.970 qpair failed and we were unable to recover it. 00:39:26.970 [2024-07-22 10:55:32.373646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.970 [2024-07-22 10:55:32.373656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.970 qpair failed and we were unable to recover it. 00:39:26.970 [2024-07-22 10:55:32.373979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.970 [2024-07-22 10:55:32.373989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.970 qpair failed and we were unable to recover it. 00:39:26.970 [2024-07-22 10:55:32.374312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.970 [2024-07-22 10:55:32.374321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.970 qpair failed and we were unable to recover it. 
00:39:26.970 [2024-07-22 10:55:32.374497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.970 [2024-07-22 10:55:32.374507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.970 qpair failed and we were unable to recover it. 00:39:26.970 [2024-07-22 10:55:32.374802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.970 [2024-07-22 10:55:32.374812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.970 qpair failed and we were unable to recover it. 00:39:26.970 [2024-07-22 10:55:32.375063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.970 [2024-07-22 10:55:32.375072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.970 qpair failed and we were unable to recover it. 00:39:26.970 [2024-07-22 10:55:32.375371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.970 [2024-07-22 10:55:32.375381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.970 qpair failed and we were unable to recover it. 00:39:26.970 [2024-07-22 10:55:32.375745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.970 [2024-07-22 10:55:32.375758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.970 qpair failed and we were unable to recover it. 00:39:26.970 [2024-07-22 10:55:32.376049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.970 [2024-07-22 10:55:32.376059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.970 qpair failed and we were unable to recover it. 00:39:26.970 [2024-07-22 10:55:32.376303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.970 [2024-07-22 10:55:32.376312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.970 qpair failed and we were unable to recover it. 00:39:26.970 [2024-07-22 10:55:32.376623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.970 [2024-07-22 10:55:32.376633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.970 qpair failed and we were unable to recover it. 00:39:26.970 [2024-07-22 10:55:32.376906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.970 [2024-07-22 10:55:32.376918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.970 qpair failed and we were unable to recover it. 00:39:26.970 [2024-07-22 10:55:32.377218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.970 [2024-07-22 10:55:32.377228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.970 qpair failed and we were unable to recover it. 
00:39:26.970 [2024-07-22 10:55:32.377473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.970 [2024-07-22 10:55:32.377483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:26.970 qpair failed and we were unable to recover it.
[... the same three-line failure pattern (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously from 10:55:32.377 through 10:55:32.446 ...]
00:39:26.976 [2024-07-22 10:55:32.446894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.976 [2024-07-22 10:55:32.446904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:26.976 qpair failed and we were unable to recover it.
00:39:26.976 [2024-07-22 10:55:32.447254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.976 [2024-07-22 10:55:32.447264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.976 qpair failed and we were unable to recover it. 00:39:26.976 [2024-07-22 10:55:32.447604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.976 [2024-07-22 10:55:32.447615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.976 qpair failed and we were unable to recover it. 00:39:26.976 [2024-07-22 10:55:32.447959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.976 [2024-07-22 10:55:32.447968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.976 qpair failed and we were unable to recover it. 00:39:26.976 [2024-07-22 10:55:32.448303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.976 [2024-07-22 10:55:32.448313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.976 qpair failed and we were unable to recover it. 00:39:26.976 [2024-07-22 10:55:32.448625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.976 [2024-07-22 10:55:32.448635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.976 qpair failed and we were unable to recover it. 00:39:26.976 [2024-07-22 10:55:32.448987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.976 [2024-07-22 10:55:32.448998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.976 qpair failed and we were unable to recover it. 00:39:26.976 [2024-07-22 10:55:32.449305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.976 [2024-07-22 10:55:32.449315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.976 qpair failed and we were unable to recover it. 00:39:26.976 [2024-07-22 10:55:32.449696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.976 [2024-07-22 10:55:32.449706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.976 qpair failed and we were unable to recover it. 00:39:26.976 [2024-07-22 10:55:32.450016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.976 [2024-07-22 10:55:32.450027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.976 qpair failed and we were unable to recover it. 00:39:26.976 [2024-07-22 10:55:32.450377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.976 [2024-07-22 10:55:32.450387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.976 qpair failed and we were unable to recover it. 
00:39:26.976 [2024-07-22 10:55:32.450727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.976 [2024-07-22 10:55:32.450737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.976 qpair failed and we were unable to recover it. 00:39:26.976 [2024-07-22 10:55:32.450933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.976 [2024-07-22 10:55:32.450944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.976 qpair failed and we were unable to recover it. 00:39:26.976 [2024-07-22 10:55:32.451275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.976 [2024-07-22 10:55:32.451286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.976 qpair failed and we were unable to recover it. 00:39:26.976 [2024-07-22 10:55:32.451596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.976 [2024-07-22 10:55:32.451606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.976 qpair failed and we were unable to recover it. 00:39:26.976 [2024-07-22 10:55:32.451907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.976 [2024-07-22 10:55:32.451918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.976 qpair failed and we were unable to recover it. 00:39:26.976 [2024-07-22 10:55:32.452242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.976 [2024-07-22 10:55:32.452252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.976 qpair failed and we were unable to recover it. 00:39:26.976 [2024-07-22 10:55:32.452605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.976 [2024-07-22 10:55:32.452615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.976 qpair failed and we were unable to recover it. 00:39:26.976 [2024-07-22 10:55:32.452808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.976 [2024-07-22 10:55:32.452819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.976 qpair failed and we were unable to recover it. 00:39:26.976 [2024-07-22 10:55:32.453128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.976 [2024-07-22 10:55:32.453137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.976 qpair failed and we were unable to recover it. 00:39:26.976 [2024-07-22 10:55:32.453389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.976 [2024-07-22 10:55:32.453405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.976 qpair failed and we were unable to recover it. 
00:39:26.976 [2024-07-22 10:55:32.453602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.976 [2024-07-22 10:55:32.453613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.976 qpair failed and we were unable to recover it. 00:39:26.976 [2024-07-22 10:55:32.453935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.976 [2024-07-22 10:55:32.453945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.976 qpair failed and we were unable to recover it. 00:39:26.976 [2024-07-22 10:55:32.454169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.976 [2024-07-22 10:55:32.454179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.976 qpair failed and we were unable to recover it. 00:39:26.976 [2024-07-22 10:55:32.454501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.976 [2024-07-22 10:55:32.454511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.976 qpair failed and we were unable to recover it. 00:39:26.976 [2024-07-22 10:55:32.454836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.976 [2024-07-22 10:55:32.454845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.976 qpair failed and we were unable to recover it. 00:39:26.976 [2024-07-22 10:55:32.455198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.976 [2024-07-22 10:55:32.455207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.976 qpair failed and we were unable to recover it. 00:39:26.976 [2024-07-22 10:55:32.455549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.976 [2024-07-22 10:55:32.455558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.976 qpair failed and we were unable to recover it. 00:39:26.976 [2024-07-22 10:55:32.455908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.976 [2024-07-22 10:55:32.455918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.976 qpair failed and we were unable to recover it. 00:39:26.976 [2024-07-22 10:55:32.456229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.976 [2024-07-22 10:55:32.456240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.976 qpair failed and we were unable to recover it. 00:39:26.976 [2024-07-22 10:55:32.456589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.976 [2024-07-22 10:55:32.456598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.976 qpair failed and we were unable to recover it. 
00:39:26.976 [2024-07-22 10:55:32.456917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.976 [2024-07-22 10:55:32.456927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.976 qpair failed and we were unable to recover it. 00:39:26.976 [2024-07-22 10:55:32.457250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.976 [2024-07-22 10:55:32.457260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.976 qpair failed and we were unable to recover it. 00:39:26.976 [2024-07-22 10:55:32.457608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.976 [2024-07-22 10:55:32.457619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.976 qpair failed and we were unable to recover it. 00:39:26.976 [2024-07-22 10:55:32.457965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.976 [2024-07-22 10:55:32.457975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.976 qpair failed and we were unable to recover it. 00:39:26.976 [2024-07-22 10:55:32.458291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.976 [2024-07-22 10:55:32.458301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.976 qpair failed and we were unable to recover it. 00:39:26.976 [2024-07-22 10:55:32.458572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.976 [2024-07-22 10:55:32.458581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.976 qpair failed and we were unable to recover it. 00:39:26.976 [2024-07-22 10:55:32.458890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.977 [2024-07-22 10:55:32.458900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.977 qpair failed and we were unable to recover it. 00:39:26.977 [2024-07-22 10:55:32.459245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.977 [2024-07-22 10:55:32.459255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.977 qpair failed and we were unable to recover it. 00:39:26.977 [2024-07-22 10:55:32.459563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.977 [2024-07-22 10:55:32.459573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.977 qpair failed and we were unable to recover it. 00:39:26.977 [2024-07-22 10:55:32.459912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.977 [2024-07-22 10:55:32.459924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.977 qpair failed and we were unable to recover it. 
00:39:26.977 [2024-07-22 10:55:32.460275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.977 [2024-07-22 10:55:32.460284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.977 qpair failed and we were unable to recover it. 00:39:26.977 [2024-07-22 10:55:32.460601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.977 [2024-07-22 10:55:32.460611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.977 qpair failed and we were unable to recover it. 00:39:26.977 [2024-07-22 10:55:32.460956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.977 [2024-07-22 10:55:32.460965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.977 qpair failed and we were unable to recover it. 00:39:26.977 [2024-07-22 10:55:32.461282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.977 [2024-07-22 10:55:32.461292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.977 qpair failed and we were unable to recover it. 00:39:26.977 [2024-07-22 10:55:32.461648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.977 [2024-07-22 10:55:32.461657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.977 qpair failed and we were unable to recover it. 00:39:26.977 [2024-07-22 10:55:32.461991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.977 [2024-07-22 10:55:32.462001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.977 qpair failed and we were unable to recover it. 00:39:26.977 [2024-07-22 10:55:32.462306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.977 [2024-07-22 10:55:32.462316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.977 qpair failed and we were unable to recover it. 00:39:26.977 [2024-07-22 10:55:32.462640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.977 [2024-07-22 10:55:32.462650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.977 qpair failed and we were unable to recover it. 00:39:26.977 [2024-07-22 10:55:32.462984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.977 [2024-07-22 10:55:32.462994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.977 qpair failed and we were unable to recover it. 00:39:26.977 [2024-07-22 10:55:32.463339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.977 [2024-07-22 10:55:32.463349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.977 qpair failed and we were unable to recover it. 
00:39:26.977 [2024-07-22 10:55:32.463653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.977 [2024-07-22 10:55:32.463663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.977 qpair failed and we were unable to recover it. 00:39:26.977 [2024-07-22 10:55:32.463988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.977 [2024-07-22 10:55:32.463998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.977 qpair failed and we were unable to recover it. 00:39:26.977 [2024-07-22 10:55:32.464353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.977 [2024-07-22 10:55:32.464363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.977 qpair failed and we were unable to recover it. 00:39:26.977 [2024-07-22 10:55:32.464712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.977 [2024-07-22 10:55:32.464722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.977 qpair failed and we were unable to recover it. 00:39:26.977 [2024-07-22 10:55:32.465047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.977 [2024-07-22 10:55:32.465057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.977 qpair failed and we were unable to recover it. 00:39:26.977 [2024-07-22 10:55:32.465451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.977 [2024-07-22 10:55:32.465461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.977 qpair failed and we were unable to recover it. 00:39:26.977 [2024-07-22 10:55:32.465753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.977 [2024-07-22 10:55:32.465762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.977 qpair failed and we were unable to recover it. 00:39:26.977 [2024-07-22 10:55:32.466121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.977 [2024-07-22 10:55:32.466131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.977 qpair failed and we were unable to recover it. 00:39:26.977 [2024-07-22 10:55:32.466449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.977 [2024-07-22 10:55:32.466459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.977 qpair failed and we were unable to recover it. 00:39:26.977 [2024-07-22 10:55:32.466782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.977 [2024-07-22 10:55:32.466791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.977 qpair failed and we were unable to recover it. 
00:39:26.977 [2024-07-22 10:55:32.467151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.977 [2024-07-22 10:55:32.467161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.977 qpair failed and we were unable to recover it. 00:39:26.977 [2024-07-22 10:55:32.467480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.977 [2024-07-22 10:55:32.467490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.977 qpair failed and we were unable to recover it. 00:39:26.977 [2024-07-22 10:55:32.467811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.977 [2024-07-22 10:55:32.467820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.977 qpair failed and we were unable to recover it. 00:39:26.977 [2024-07-22 10:55:32.468126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.977 [2024-07-22 10:55:32.468136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.977 qpair failed and we were unable to recover it. 00:39:26.977 [2024-07-22 10:55:32.468451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.977 [2024-07-22 10:55:32.468460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.977 qpair failed and we were unable to recover it. 00:39:26.977 [2024-07-22 10:55:32.468836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.977 [2024-07-22 10:55:32.468845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.977 qpair failed and we were unable to recover it. 00:39:26.977 [2024-07-22 10:55:32.469143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.977 [2024-07-22 10:55:32.469153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.977 qpair failed and we were unable to recover it. 00:39:26.977 [2024-07-22 10:55:32.469474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.977 [2024-07-22 10:55:32.469483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.977 qpair failed and we were unable to recover it. 00:39:26.977 [2024-07-22 10:55:32.469802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.977 [2024-07-22 10:55:32.469811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.977 qpair failed and we were unable to recover it. 00:39:26.977 [2024-07-22 10:55:32.470113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.977 [2024-07-22 10:55:32.470123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.977 qpair failed and we were unable to recover it. 
00:39:26.977 [2024-07-22 10:55:32.470465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.977 [2024-07-22 10:55:32.470474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.977 qpair failed and we were unable to recover it. 00:39:26.977 [2024-07-22 10:55:32.470799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.978 [2024-07-22 10:55:32.470809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.978 qpair failed and we were unable to recover it. 00:39:26.978 [2024-07-22 10:55:32.471028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.978 [2024-07-22 10:55:32.471038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.978 qpair failed and we were unable to recover it. 00:39:26.978 [2024-07-22 10:55:32.471355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.978 [2024-07-22 10:55:32.471366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.978 qpair failed and we were unable to recover it. 00:39:26.978 [2024-07-22 10:55:32.471691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.978 [2024-07-22 10:55:32.471701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.978 qpair failed and we were unable to recover it. 00:39:26.978 [2024-07-22 10:55:32.472025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.978 [2024-07-22 10:55:32.472034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.978 qpair failed and we were unable to recover it. 00:39:26.978 [2024-07-22 10:55:32.472371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.978 [2024-07-22 10:55:32.472381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.978 qpair failed and we were unable to recover it. 00:39:26.978 [2024-07-22 10:55:32.472726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.978 [2024-07-22 10:55:32.472736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.978 qpair failed and we were unable to recover it. 00:39:26.978 [2024-07-22 10:55:32.473042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.978 [2024-07-22 10:55:32.473052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.978 qpair failed and we were unable to recover it. 00:39:26.978 [2024-07-22 10:55:32.473359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.978 [2024-07-22 10:55:32.473369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.978 qpair failed and we were unable to recover it. 
00:39:26.978 [2024-07-22 10:55:32.473692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.978 [2024-07-22 10:55:32.473702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.978 qpair failed and we were unable to recover it. 00:39:26.978 [2024-07-22 10:55:32.474011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.978 [2024-07-22 10:55:32.474021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.978 qpair failed and we were unable to recover it. 00:39:26.978 [2024-07-22 10:55:32.474343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.978 [2024-07-22 10:55:32.474353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.978 qpair failed and we were unable to recover it. 00:39:26.978 [2024-07-22 10:55:32.474672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.978 [2024-07-22 10:55:32.474682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.978 qpair failed and we were unable to recover it. 00:39:26.978 [2024-07-22 10:55:32.475030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.978 [2024-07-22 10:55:32.475040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.978 qpair failed and we were unable to recover it. 00:39:26.978 [2024-07-22 10:55:32.475382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.978 [2024-07-22 10:55:32.475392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.978 qpair failed and we were unable to recover it. 00:39:26.978 [2024-07-22 10:55:32.475742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.978 [2024-07-22 10:55:32.475752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.978 qpair failed and we were unable to recover it. 00:39:26.978 [2024-07-22 10:55:32.476085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.978 [2024-07-22 10:55:32.476096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.978 qpair failed and we were unable to recover it. 00:39:26.978 [2024-07-22 10:55:32.476445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.978 [2024-07-22 10:55:32.476455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.978 qpair failed and we were unable to recover it. 00:39:26.978 [2024-07-22 10:55:32.476763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.978 [2024-07-22 10:55:32.476773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.978 qpair failed and we were unable to recover it. 
00:39:26.978 [2024-07-22 10:55:32.477158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.978 [2024-07-22 10:55:32.477167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.978 qpair failed and we were unable to recover it. 00:39:26.978 [2024-07-22 10:55:32.477465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.978 [2024-07-22 10:55:32.477475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.978 qpair failed and we were unable to recover it. 00:39:26.978 [2024-07-22 10:55:32.477810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.978 [2024-07-22 10:55:32.477819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.978 qpair failed and we were unable to recover it. 00:39:26.978 [2024-07-22 10:55:32.478155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.978 [2024-07-22 10:55:32.478166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.978 qpair failed and we were unable to recover it. 00:39:26.978 [2024-07-22 10:55:32.478388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.978 [2024-07-22 10:55:32.478403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.978 qpair failed and we were unable to recover it. 00:39:26.978 [2024-07-22 10:55:32.478730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.978 [2024-07-22 10:55:32.478739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.978 qpair failed and we were unable to recover it. 00:39:26.978 [2024-07-22 10:55:32.479036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.978 [2024-07-22 10:55:32.479045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.978 qpair failed and we were unable to recover it. 00:39:26.978 [2024-07-22 10:55:32.479359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.978 [2024-07-22 10:55:32.479369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.978 qpair failed and we were unable to recover it. 00:39:26.978 [2024-07-22 10:55:32.479672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.978 [2024-07-22 10:55:32.479682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.978 qpair failed and we were unable to recover it. 00:39:26.978 [2024-07-22 10:55:32.479997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.978 [2024-07-22 10:55:32.480006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.978 qpair failed and we were unable to recover it. 
00:39:26.978 [2024-07-22 10:55:32.480250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.978 [2024-07-22 10:55:32.480261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.978 qpair failed and we were unable to recover it. 00:39:26.978 [2024-07-22 10:55:32.480566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.978 [2024-07-22 10:55:32.480576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.978 qpair failed and we were unable to recover it. 00:39:26.978 [2024-07-22 10:55:32.480884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.978 [2024-07-22 10:55:32.480894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.978 qpair failed and we were unable to recover it. 00:39:26.978 [2024-07-22 10:55:32.481086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.978 [2024-07-22 10:55:32.481097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.978 qpair failed and we were unable to recover it. 00:39:26.978 [2024-07-22 10:55:32.481448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.978 [2024-07-22 10:55:32.481458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.978 qpair failed and we were unable to recover it. 00:39:26.978 [2024-07-22 10:55:32.481715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.978 [2024-07-22 10:55:32.481725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.978 qpair failed and we were unable to recover it. 00:39:26.978 [2024-07-22 10:55:32.482027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.978 [2024-07-22 10:55:32.482037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.978 qpair failed and we were unable to recover it. 00:39:26.978 [2024-07-22 10:55:32.482251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.978 [2024-07-22 10:55:32.482261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.978 qpair failed and we were unable to recover it. 00:39:26.978 [2024-07-22 10:55:32.482485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.978 [2024-07-22 10:55:32.482494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.978 qpair failed and we were unable to recover it. 00:39:26.978 [2024-07-22 10:55:32.482835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.978 [2024-07-22 10:55:32.482844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.978 qpair failed and we were unable to recover it. 
00:39:26.978 [2024-07-22 10:55:32.483165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.978 [2024-07-22 10:55:32.483174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.978 qpair failed and we were unable to recover it. 00:39:26.978 [2024-07-22 10:55:32.483528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.978 [2024-07-22 10:55:32.483538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.978 qpair failed and we were unable to recover it. 00:39:26.978 [2024-07-22 10:55:32.483835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.979 [2024-07-22 10:55:32.483845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.979 qpair failed and we were unable to recover it. 00:39:26.979 [2024-07-22 10:55:32.484190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.979 [2024-07-22 10:55:32.484200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.979 qpair failed and we were unable to recover it. 00:39:26.979 [2024-07-22 10:55:32.484575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.979 [2024-07-22 10:55:32.484585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.979 qpair failed and we were unable to recover it. 00:39:26.979 [2024-07-22 10:55:32.484905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.979 [2024-07-22 10:55:32.484915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.979 qpair failed and we were unable to recover it. 00:39:26.979 [2024-07-22 10:55:32.485257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.979 [2024-07-22 10:55:32.485266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.979 qpair failed and we were unable to recover it. 00:39:26.979 [2024-07-22 10:55:32.485607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.979 [2024-07-22 10:55:32.485617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.979 qpair failed and we were unable to recover it. 00:39:26.979 [2024-07-22 10:55:32.485958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.979 [2024-07-22 10:55:32.485968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.979 qpair failed and we were unable to recover it. 00:39:26.979 [2024-07-22 10:55:32.486290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.979 [2024-07-22 10:55:32.486301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.979 qpair failed and we were unable to recover it. 
00:39:26.979 [2024-07-22 10:55:32.486523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.979 [2024-07-22 10:55:32.486533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.979 qpair failed and we were unable to recover it. 00:39:26.979 [2024-07-22 10:55:32.486845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.979 [2024-07-22 10:55:32.486855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.979 qpair failed and we were unable to recover it. 00:39:26.979 [2024-07-22 10:55:32.487192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.979 [2024-07-22 10:55:32.487202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.979 qpair failed and we were unable to recover it. 00:39:26.979 [2024-07-22 10:55:32.487521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.979 [2024-07-22 10:55:32.487532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.979 qpair failed and we were unable to recover it. 00:39:26.979 [2024-07-22 10:55:32.487878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.979 [2024-07-22 10:55:32.487888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.979 qpair failed and we were unable to recover it. 00:39:26.979 [2024-07-22 10:55:32.488118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.979 [2024-07-22 10:55:32.488129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.979 qpair failed and we were unable to recover it. 00:39:26.979 [2024-07-22 10:55:32.488445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.979 [2024-07-22 10:55:32.488456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.979 qpair failed and we were unable to recover it. 00:39:26.979 [2024-07-22 10:55:32.488777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.979 [2024-07-22 10:55:32.488787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.979 qpair failed and we were unable to recover it. 00:39:26.979 [2024-07-22 10:55:32.489096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.979 [2024-07-22 10:55:32.489106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.979 qpair failed and we were unable to recover it. 00:39:26.979 [2024-07-22 10:55:32.489448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.979 [2024-07-22 10:55:32.489459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.979 qpair failed and we were unable to recover it. 
00:39:26.979 [2024-07-22 10:55:32.489793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.979 [2024-07-22 10:55:32.489803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.979 qpair failed and we were unable to recover it. 00:39:26.979 [2024-07-22 10:55:32.490121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.979 [2024-07-22 10:55:32.490131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.979 qpair failed and we were unable to recover it. 00:39:26.979 [2024-07-22 10:55:32.490452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.979 [2024-07-22 10:55:32.490462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.979 qpair failed and we were unable to recover it. 00:39:26.979 [2024-07-22 10:55:32.490750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.979 [2024-07-22 10:55:32.490760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.979 qpair failed and we were unable to recover it. 00:39:26.979 [2024-07-22 10:55:32.491129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.979 [2024-07-22 10:55:32.491140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.979 qpair failed and we were unable to recover it. 00:39:26.979 [2024-07-22 10:55:32.491453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.979 [2024-07-22 10:55:32.491463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.979 qpair failed and we were unable to recover it. 00:39:26.979 [2024-07-22 10:55:32.491764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.979 [2024-07-22 10:55:32.491774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.979 qpair failed and we were unable to recover it. 00:39:26.979 [2024-07-22 10:55:32.492108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.979 [2024-07-22 10:55:32.492119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.979 qpair failed and we were unable to recover it. 00:39:26.979 [2024-07-22 10:55:32.492474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.979 [2024-07-22 10:55:32.492484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.979 qpair failed and we were unable to recover it. 00:39:26.979 [2024-07-22 10:55:32.492812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.979 [2024-07-22 10:55:32.492822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.979 qpair failed and we were unable to recover it. 
[The identical three-line failure sequence (posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for roughly 200 further connection attempts, timestamps 2024-07-22 10:55:32.493172 through 10:55:32.559290.]
00:39:26.984 [2024-07-22 10:55:32.559630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.984 [2024-07-22 10:55:32.559641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.984 qpair failed and we were unable to recover it. 00:39:26.984 [2024-07-22 10:55:32.559958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.984 [2024-07-22 10:55:32.559970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.984 qpair failed and we were unable to recover it. 00:39:26.984 [2024-07-22 10:55:32.560292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.984 [2024-07-22 10:55:32.560303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.984 qpair failed and we were unable to recover it. 00:39:26.984 [2024-07-22 10:55:32.560639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.984 [2024-07-22 10:55:32.560650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.984 qpair failed and we were unable to recover it. 00:39:26.984 [2024-07-22 10:55:32.560990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.984 [2024-07-22 10:55:32.561001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.984 qpair failed and we were unable to recover it. 00:39:26.984 [2024-07-22 10:55:32.561302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.984 [2024-07-22 10:55:32.561314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.984 qpair failed and we were unable to recover it. 00:39:26.984 [2024-07-22 10:55:32.561662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.984 [2024-07-22 10:55:32.561673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.984 qpair failed and we were unable to recover it. 00:39:26.984 [2024-07-22 10:55:32.561995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.984 [2024-07-22 10:55:32.562006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.984 qpair failed and we were unable to recover it. 00:39:26.984 [2024-07-22 10:55:32.562317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.984 [2024-07-22 10:55:32.562328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.984 qpair failed and we were unable to recover it. 00:39:26.984 [2024-07-22 10:55:32.562649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.984 [2024-07-22 10:55:32.562660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.984 qpair failed and we were unable to recover it. 
00:39:26.984 [2024-07-22 10:55:32.562983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.984 [2024-07-22 10:55:32.562995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.984 qpair failed and we were unable to recover it. 00:39:26.984 [2024-07-22 10:55:32.563307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.985 [2024-07-22 10:55:32.563319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.985 qpair failed and we were unable to recover it. 00:39:26.985 [2024-07-22 10:55:32.563670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.985 [2024-07-22 10:55:32.563682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.985 qpair failed and we were unable to recover it. 00:39:26.985 [2024-07-22 10:55:32.564020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.985 [2024-07-22 10:55:32.564031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.985 qpair failed and we were unable to recover it. 00:39:26.985 [2024-07-22 10:55:32.564343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.985 [2024-07-22 10:55:32.564354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.985 qpair failed and we were unable to recover it. 00:39:26.985 [2024-07-22 10:55:32.564676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.985 [2024-07-22 10:55:32.564688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.985 qpair failed and we were unable to recover it. 00:39:26.985 [2024-07-22 10:55:32.564994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.985 [2024-07-22 10:55:32.565006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.985 qpair failed and we were unable to recover it. 00:39:26.985 [2024-07-22 10:55:32.565385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.985 [2024-07-22 10:55:32.565401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.985 qpair failed and we were unable to recover it. 00:39:26.985 [2024-07-22 10:55:32.565700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.985 [2024-07-22 10:55:32.565711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.985 qpair failed and we were unable to recover it. 00:39:26.985 [2024-07-22 10:55:32.566035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.985 [2024-07-22 10:55:32.566046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.985 qpair failed and we were unable to recover it. 
00:39:26.985 [2024-07-22 10:55:32.566406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.985 [2024-07-22 10:55:32.566417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.985 qpair failed and we were unable to recover it. 00:39:26.985 [2024-07-22 10:55:32.566722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.985 [2024-07-22 10:55:32.566732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.985 qpair failed and we were unable to recover it. 00:39:26.985 [2024-07-22 10:55:32.566923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.985 [2024-07-22 10:55:32.566933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.985 qpair failed and we were unable to recover it. 00:39:26.985 [2024-07-22 10:55:32.567285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.985 [2024-07-22 10:55:32.567296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.985 qpair failed and we were unable to recover it. 00:39:26.985 [2024-07-22 10:55:32.567615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.985 [2024-07-22 10:55:32.567628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.985 qpair failed and we were unable to recover it. 00:39:26.985 [2024-07-22 10:55:32.567970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.985 [2024-07-22 10:55:32.567980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.985 qpair failed and we were unable to recover it. 00:39:26.985 [2024-07-22 10:55:32.568289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.985 [2024-07-22 10:55:32.568301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.985 qpair failed and we were unable to recover it. 00:39:26.985 [2024-07-22 10:55:32.568645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.985 [2024-07-22 10:55:32.568656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.985 qpair failed and we were unable to recover it. 00:39:26.985 [2024-07-22 10:55:32.568998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.985 [2024-07-22 10:55:32.569008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.985 qpair failed and we were unable to recover it. 00:39:26.985 [2024-07-22 10:55:32.569352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.985 [2024-07-22 10:55:32.569364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.985 qpair failed and we were unable to recover it. 
00:39:26.985 [2024-07-22 10:55:32.569684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.985 [2024-07-22 10:55:32.569695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.985 qpair failed and we were unable to recover it. 00:39:26.985 [2024-07-22 10:55:32.570020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.985 [2024-07-22 10:55:32.570032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.985 qpair failed and we were unable to recover it. 00:39:26.985 [2024-07-22 10:55:32.570376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.985 [2024-07-22 10:55:32.570387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.985 qpair failed and we were unable to recover it. 00:39:26.985 [2024-07-22 10:55:32.570716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.985 [2024-07-22 10:55:32.570727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.985 qpair failed and we were unable to recover it. 00:39:26.985 [2024-07-22 10:55:32.571045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.985 [2024-07-22 10:55:32.571056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.985 qpair failed and we were unable to recover it. 00:39:26.985 [2024-07-22 10:55:32.571375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.985 [2024-07-22 10:55:32.571386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.985 qpair failed and we were unable to recover it. 00:39:26.985 [2024-07-22 10:55:32.571726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.985 [2024-07-22 10:55:32.571738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.985 qpair failed and we were unable to recover it. 00:39:26.985 [2024-07-22 10:55:32.572080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.985 [2024-07-22 10:55:32.572091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.985 qpair failed and we were unable to recover it. 00:39:26.985 [2024-07-22 10:55:32.572414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.985 [2024-07-22 10:55:32.572426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.985 qpair failed and we were unable to recover it. 00:39:26.985 [2024-07-22 10:55:32.572778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.985 [2024-07-22 10:55:32.572789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.985 qpair failed and we were unable to recover it. 
00:39:26.985 [2024-07-22 10:55:32.573094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.985 [2024-07-22 10:55:32.573105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.985 qpair failed and we were unable to recover it. 00:39:26.985 [2024-07-22 10:55:32.573451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.985 [2024-07-22 10:55:32.573462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.985 qpair failed and we were unable to recover it. 00:39:26.985 [2024-07-22 10:55:32.573640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.985 [2024-07-22 10:55:32.573650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.985 qpair failed and we were unable to recover it. 00:39:26.985 [2024-07-22 10:55:32.573928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.985 [2024-07-22 10:55:32.573938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.985 qpair failed and we were unable to recover it. 00:39:26.985 [2024-07-22 10:55:32.574282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.985 [2024-07-22 10:55:32.574292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.985 qpair failed and we were unable to recover it. 00:39:26.985 [2024-07-22 10:55:32.574626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.985 [2024-07-22 10:55:32.574640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.985 qpair failed and we were unable to recover it. 00:39:26.985 [2024-07-22 10:55:32.574898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.985 [2024-07-22 10:55:32.574909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.985 qpair failed and we were unable to recover it. 00:39:26.985 [2024-07-22 10:55:32.575227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.985 [2024-07-22 10:55:32.575239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.985 qpair failed and we were unable to recover it. 00:39:26.986 [2024-07-22 10:55:32.575587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.986 [2024-07-22 10:55:32.575597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.986 qpair failed and we were unable to recover it. 00:39:26.986 [2024-07-22 10:55:32.575918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.986 [2024-07-22 10:55:32.575929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.986 qpair failed and we were unable to recover it. 
00:39:26.986 [2024-07-22 10:55:32.576252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.986 [2024-07-22 10:55:32.576263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.986 qpair failed and we were unable to recover it. 00:39:26.986 [2024-07-22 10:55:32.576574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.986 [2024-07-22 10:55:32.576585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.986 qpair failed and we were unable to recover it. 00:39:26.986 [2024-07-22 10:55:32.576920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.986 [2024-07-22 10:55:32.576930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.986 qpair failed and we were unable to recover it. 00:39:26.986 [2024-07-22 10:55:32.577270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.986 [2024-07-22 10:55:32.577281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.986 qpair failed and we were unable to recover it. 00:39:26.986 [2024-07-22 10:55:32.577633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.986 [2024-07-22 10:55:32.577645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.986 qpair failed and we were unable to recover it. 00:39:26.986 [2024-07-22 10:55:32.577963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.986 [2024-07-22 10:55:32.577973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.986 qpair failed and we were unable to recover it. 00:39:26.986 [2024-07-22 10:55:32.578312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.986 [2024-07-22 10:55:32.578324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.986 qpair failed and we were unable to recover it. 00:39:26.986 [2024-07-22 10:55:32.578600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.986 [2024-07-22 10:55:32.578610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.986 qpair failed and we were unable to recover it. 00:39:26.986 [2024-07-22 10:55:32.578988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.986 [2024-07-22 10:55:32.579000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.986 qpair failed and we were unable to recover it. 00:39:26.986 [2024-07-22 10:55:32.579320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.986 [2024-07-22 10:55:32.579333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.986 qpair failed and we were unable to recover it. 
00:39:26.986 [2024-07-22 10:55:32.579643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.986 [2024-07-22 10:55:32.579655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.986 qpair failed and we were unable to recover it. 00:39:26.986 [2024-07-22 10:55:32.580011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.986 [2024-07-22 10:55:32.580022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.986 qpair failed and we were unable to recover it. 00:39:26.986 [2024-07-22 10:55:32.580342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.986 [2024-07-22 10:55:32.580353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.986 qpair failed and we were unable to recover it. 00:39:26.986 [2024-07-22 10:55:32.580688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.986 [2024-07-22 10:55:32.580700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.986 qpair failed and we were unable to recover it. 00:39:26.986 [2024-07-22 10:55:32.580892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.986 [2024-07-22 10:55:32.580905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.986 qpair failed and we were unable to recover it. 00:39:26.986 [2024-07-22 10:55:32.581225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.986 [2024-07-22 10:55:32.581237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.986 qpair failed and we were unable to recover it. 00:39:26.986 [2024-07-22 10:55:32.581557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.986 [2024-07-22 10:55:32.581567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.986 qpair failed and we were unable to recover it. 00:39:26.986 [2024-07-22 10:55:32.581888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.986 [2024-07-22 10:55:32.581899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.986 qpair failed and we were unable to recover it. 00:39:26.986 [2024-07-22 10:55:32.582242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.986 [2024-07-22 10:55:32.582253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.986 qpair failed and we were unable to recover it. 00:39:26.986 [2024-07-22 10:55:32.582571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.986 [2024-07-22 10:55:32.582583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.986 qpair failed and we were unable to recover it. 
00:39:26.986 [2024-07-22 10:55:32.582928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.986 [2024-07-22 10:55:32.582938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.986 qpair failed and we were unable to recover it. 00:39:26.986 [2024-07-22 10:55:32.583194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.986 [2024-07-22 10:55:32.583205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.986 qpair failed and we were unable to recover it. 00:39:26.986 [2024-07-22 10:55:32.583393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.986 [2024-07-22 10:55:32.583417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.986 qpair failed and we were unable to recover it. 00:39:26.986 [2024-07-22 10:55:32.583668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.986 [2024-07-22 10:55:32.583680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.986 qpair failed and we were unable to recover it. 00:39:26.986 [2024-07-22 10:55:32.584005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.986 [2024-07-22 10:55:32.584016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.986 qpair failed and we were unable to recover it. 00:39:26.986 [2024-07-22 10:55:32.584337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.986 [2024-07-22 10:55:32.584348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.986 qpair failed and we were unable to recover it. 00:39:26.986 [2024-07-22 10:55:32.584665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.986 [2024-07-22 10:55:32.584676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.986 qpair failed and we were unable to recover it. 00:39:26.986 [2024-07-22 10:55:32.584983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.986 [2024-07-22 10:55:32.584995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.986 qpair failed and we were unable to recover it. 00:39:26.986 [2024-07-22 10:55:32.585346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.986 [2024-07-22 10:55:32.585357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.986 qpair failed and we were unable to recover it. 00:39:26.986 [2024-07-22 10:55:32.585680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.986 [2024-07-22 10:55:32.585692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.986 qpair failed and we were unable to recover it. 
00:39:26.986 [2024-07-22 10:55:32.586009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.986 [2024-07-22 10:55:32.586019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.986 qpair failed and we were unable to recover it. 00:39:26.986 [2024-07-22 10:55:32.586361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.986 [2024-07-22 10:55:32.586371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.986 qpair failed and we were unable to recover it. 00:39:26.986 [2024-07-22 10:55:32.586706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.987 [2024-07-22 10:55:32.586719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.987 qpair failed and we were unable to recover it. 00:39:26.987 [2024-07-22 10:55:32.587053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.987 [2024-07-22 10:55:32.587064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.987 qpair failed and we were unable to recover it. 00:39:26.987 [2024-07-22 10:55:32.587409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.987 [2024-07-22 10:55:32.587421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.987 qpair failed and we were unable to recover it. 00:39:26.987 [2024-07-22 10:55:32.587733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.987 [2024-07-22 10:55:32.587744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.987 qpair failed and we were unable to recover it. 00:39:26.987 [2024-07-22 10:55:32.588066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.987 [2024-07-22 10:55:32.588077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.987 qpair failed and we were unable to recover it. 00:39:26.987 [2024-07-22 10:55:32.588401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.987 [2024-07-22 10:55:32.588411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.987 qpair failed and we were unable to recover it. 00:39:26.987 [2024-07-22 10:55:32.588740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.987 [2024-07-22 10:55:32.588751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.987 qpair failed and we were unable to recover it. 00:39:26.987 [2024-07-22 10:55:32.589059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.987 [2024-07-22 10:55:32.589071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.987 qpair failed and we were unable to recover it. 
00:39:26.987 [2024-07-22 10:55:32.589446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.987 [2024-07-22 10:55:32.589457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.987 qpair failed and we were unable to recover it. 00:39:26.987 [2024-07-22 10:55:32.589775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.987 [2024-07-22 10:55:32.589787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.987 qpair failed and we were unable to recover it. 00:39:26.987 [2024-07-22 10:55:32.589980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.987 [2024-07-22 10:55:32.589991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.987 qpair failed and we were unable to recover it. 00:39:26.987 [2024-07-22 10:55:32.590334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.987 [2024-07-22 10:55:32.590345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.987 qpair failed and we were unable to recover it. 00:39:26.987 [2024-07-22 10:55:32.590670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.987 [2024-07-22 10:55:32.590681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.987 qpair failed and we were unable to recover it. 00:39:26.987 [2024-07-22 10:55:32.591007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.987 [2024-07-22 10:55:32.591018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.987 qpair failed and we were unable to recover it. 00:39:26.987 [2024-07-22 10:55:32.591362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.987 [2024-07-22 10:55:32.591374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.987 qpair failed and we were unable to recover it. 00:39:26.987 [2024-07-22 10:55:32.591714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.987 [2024-07-22 10:55:32.591725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.987 qpair failed and we were unable to recover it. 00:39:26.987 [2024-07-22 10:55:32.592123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.987 [2024-07-22 10:55:32.592134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.987 qpair failed and we were unable to recover it. 00:39:26.987 [2024-07-22 10:55:32.592321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.987 [2024-07-22 10:55:32.592334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.987 qpair failed and we were unable to recover it. 
00:39:26.987 [2024-07-22 10:55:32.592661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.987 [2024-07-22 10:55:32.592673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.987 qpair failed and we were unable to recover it. 00:39:26.987 [2024-07-22 10:55:32.592899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.987 [2024-07-22 10:55:32.592910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.987 qpair failed and we were unable to recover it. 00:39:26.987 [2024-07-22 10:55:32.593234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.987 [2024-07-22 10:55:32.593244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.987 qpair failed and we were unable to recover it. 00:39:26.987 [2024-07-22 10:55:32.593565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.987 [2024-07-22 10:55:32.593577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.987 qpair failed and we were unable to recover it. 00:39:26.987 [2024-07-22 10:55:32.593931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.987 [2024-07-22 10:55:32.593942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.987 qpair failed and we were unable to recover it. 00:39:26.987 [2024-07-22 10:55:32.594286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.987 [2024-07-22 10:55:32.594297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.987 qpair failed and we were unable to recover it. 00:39:26.987 [2024-07-22 10:55:32.594684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.987 [2024-07-22 10:55:32.594694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.987 qpair failed and we were unable to recover it. 00:39:26.987 [2024-07-22 10:55:32.595011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.987 [2024-07-22 10:55:32.595023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.987 qpair failed and we were unable to recover it. 00:39:26.987 [2024-07-22 10:55:32.595364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.987 [2024-07-22 10:55:32.595376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.987 qpair failed and we were unable to recover it. 00:39:26.987 [2024-07-22 10:55:32.595717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.987 [2024-07-22 10:55:32.595730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.987 qpair failed and we were unable to recover it. 
00:39:26.987 [2024-07-22 10:55:32.596051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.987 [2024-07-22 10:55:32.596062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.987 qpair failed and we were unable to recover it. 00:39:26.987 [2024-07-22 10:55:32.596428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.987 [2024-07-22 10:55:32.596439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.987 qpair failed and we were unable to recover it. 00:39:26.987 [2024-07-22 10:55:32.596754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.987 [2024-07-22 10:55:32.596765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.987 qpair failed and we were unable to recover it. 00:39:26.987 [2024-07-22 10:55:32.597096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.987 [2024-07-22 10:55:32.597107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.987 qpair failed and we were unable to recover it. 00:39:26.987 [2024-07-22 10:55:32.597417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.987 [2024-07-22 10:55:32.597429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.987 qpair failed and we were unable to recover it. 00:39:26.987 [2024-07-22 10:55:32.597757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.987 [2024-07-22 10:55:32.597767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.987 qpair failed and we were unable to recover it. 00:39:26.987 [2024-07-22 10:55:32.598112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.987 [2024-07-22 10:55:32.598122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.987 qpair failed and we were unable to recover it. 00:39:26.987 [2024-07-22 10:55:32.598463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.987 [2024-07-22 10:55:32.598475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.987 qpair failed and we were unable to recover it. 00:39:26.987 [2024-07-22 10:55:32.598791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.987 [2024-07-22 10:55:32.598801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.987 qpair failed and we were unable to recover it. 00:39:26.987 [2024-07-22 10:55:32.599120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.987 [2024-07-22 10:55:32.599131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.987 qpair failed and we were unable to recover it. 
00:39:26.988 [2024-07-22 10:55:32.599476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.988 [2024-07-22 10:55:32.599487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.988 qpair failed and we were unable to recover it. 00:39:26.988 [2024-07-22 10:55:32.599833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.988 [2024-07-22 10:55:32.599845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.988 qpair failed and we were unable to recover it. 00:39:26.988 [2024-07-22 10:55:32.600164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.988 [2024-07-22 10:55:32.600175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.988 qpair failed and we were unable to recover it. 00:39:26.988 [2024-07-22 10:55:32.600469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.988 [2024-07-22 10:55:32.600480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.988 qpair failed and we were unable to recover it. 00:39:26.988 [2024-07-22 10:55:32.600808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.988 [2024-07-22 10:55:32.600819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.988 qpair failed and we were unable to recover it. 00:39:26.988 [2024-07-22 10:55:32.601109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.988 [2024-07-22 10:55:32.601120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.988 qpair failed and we were unable to recover it. 00:39:26.988 [2024-07-22 10:55:32.601440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.988 [2024-07-22 10:55:32.601451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.988 qpair failed and we were unable to recover it. 00:39:26.988 [2024-07-22 10:55:32.601740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.988 [2024-07-22 10:55:32.601752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.988 qpair failed and we were unable to recover it. 00:39:26.988 [2024-07-22 10:55:32.602078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.988 [2024-07-22 10:55:32.602089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.988 qpair failed and we were unable to recover it. 00:39:26.988 [2024-07-22 10:55:32.602436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.988 [2024-07-22 10:55:32.602447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.988 qpair failed and we were unable to recover it. 
00:39:26.988 [2024-07-22 10:55:32.602745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.988 [2024-07-22 10:55:32.602755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.988 qpair failed and we were unable to recover it. 00:39:26.988 [2024-07-22 10:55:32.603073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.988 [2024-07-22 10:55:32.603084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.988 qpair failed and we were unable to recover it. 00:39:26.988 [2024-07-22 10:55:32.603435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.988 [2024-07-22 10:55:32.603446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.988 qpair failed and we were unable to recover it. 00:39:26.988 [2024-07-22 10:55:32.603761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.988 [2024-07-22 10:55:32.603773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.988 qpair failed and we were unable to recover it. 00:39:26.988 [2024-07-22 10:55:32.604085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.988 [2024-07-22 10:55:32.604096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.988 qpair failed and we were unable to recover it. 00:39:26.988 [2024-07-22 10:55:32.604412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.988 [2024-07-22 10:55:32.604424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.988 qpair failed and we were unable to recover it. 00:39:26.988 [2024-07-22 10:55:32.604738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.988 [2024-07-22 10:55:32.604749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.988 qpair failed and we were unable to recover it. 00:39:26.988 [2024-07-22 10:55:32.604967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.988 [2024-07-22 10:55:32.604977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.988 qpair failed and we were unable to recover it. 00:39:26.988 [2024-07-22 10:55:32.605298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.988 [2024-07-22 10:55:32.605309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.988 qpair failed and we were unable to recover it. 00:39:26.988 [2024-07-22 10:55:32.605643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.988 [2024-07-22 10:55:32.605655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:26.988 qpair failed and we were unable to recover it. 
00:39:26.988 - 00:39:27.267 [2024-07-22 10:55:32.605845 - 10:55:32.671517] (duplicate retry output condensed: the same error group, posix_sock_create connect() failed with errno = 111, nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it.", repeats for every subsequent connection attempt in this interval)
00:39:27.267 [2024-07-22 10:55:32.671890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.267 [2024-07-22 10:55:32.671901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.267 qpair failed and we were unable to recover it. 00:39:27.267 [2024-07-22 10:55:32.672212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.267 [2024-07-22 10:55:32.672223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.267 qpair failed and we were unable to recover it. 00:39:27.267 [2024-07-22 10:55:32.672561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.267 [2024-07-22 10:55:32.672572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.267 qpair failed and we were unable to recover it. 00:39:27.267 [2024-07-22 10:55:32.672898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.267 [2024-07-22 10:55:32.672909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.267 qpair failed and we were unable to recover it. 00:39:27.267 [2024-07-22 10:55:32.673265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.267 [2024-07-22 10:55:32.673277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.267 qpair failed and we were unable to recover it. 00:39:27.267 [2024-07-22 10:55:32.673612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.267 [2024-07-22 10:55:32.673623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.267 qpair failed and we were unable to recover it. 00:39:27.267 [2024-07-22 10:55:32.673962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.267 [2024-07-22 10:55:32.673972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.267 qpair failed and we were unable to recover it. 00:39:27.267 [2024-07-22 10:55:32.674285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.267 [2024-07-22 10:55:32.674296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.267 qpair failed and we were unable to recover it. 00:39:27.267 [2024-07-22 10:55:32.674630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.267 [2024-07-22 10:55:32.674641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.267 qpair failed and we were unable to recover it. 00:39:27.267 [2024-07-22 10:55:32.674984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.267 [2024-07-22 10:55:32.674994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.267 qpair failed and we were unable to recover it. 
00:39:27.267 [2024-07-22 10:55:32.675313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.267 [2024-07-22 10:55:32.675324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.267 qpair failed and we were unable to recover it. 00:39:27.267 [2024-07-22 10:55:32.675647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.267 [2024-07-22 10:55:32.675658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.267 qpair failed and we were unable to recover it. 00:39:27.267 [2024-07-22 10:55:32.676004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.267 [2024-07-22 10:55:32.676015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.267 qpair failed and we were unable to recover it. 00:39:27.267 [2024-07-22 10:55:32.676356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.267 [2024-07-22 10:55:32.676366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.267 qpair failed and we were unable to recover it. 00:39:27.267 [2024-07-22 10:55:32.676715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.267 [2024-07-22 10:55:32.676726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.267 qpair failed and we were unable to recover it. 00:39:27.267 [2024-07-22 10:55:32.677047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.267 [2024-07-22 10:55:32.677058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.267 qpair failed and we were unable to recover it. 00:39:27.267 [2024-07-22 10:55:32.677365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.267 [2024-07-22 10:55:32.677375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.267 qpair failed and we were unable to recover it. 00:39:27.267 [2024-07-22 10:55:32.677773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.267 [2024-07-22 10:55:32.677785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.267 qpair failed and we were unable to recover it. 00:39:27.267 [2024-07-22 10:55:32.678001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.267 [2024-07-22 10:55:32.678011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.267 qpair failed and we were unable to recover it. 00:39:27.267 [2024-07-22 10:55:32.678364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.267 [2024-07-22 10:55:32.678376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.267 qpair failed and we were unable to recover it. 
00:39:27.267 [2024-07-22 10:55:32.678719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.267 [2024-07-22 10:55:32.678730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.267 qpair failed and we were unable to recover it. 00:39:27.267 [2024-07-22 10:55:32.679074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.267 [2024-07-22 10:55:32.679085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.267 qpair failed and we were unable to recover it. 00:39:27.267 [2024-07-22 10:55:32.679405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.267 [2024-07-22 10:55:32.679416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.267 qpair failed and we were unable to recover it. 00:39:27.267 [2024-07-22 10:55:32.679657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.267 [2024-07-22 10:55:32.679667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.267 qpair failed and we were unable to recover it. 00:39:27.267 [2024-07-22 10:55:32.679959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.267 [2024-07-22 10:55:32.679971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.267 qpair failed and we were unable to recover it. 00:39:27.267 [2024-07-22 10:55:32.680315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.267 [2024-07-22 10:55:32.680325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.267 qpair failed and we were unable to recover it. 00:39:27.267 [2024-07-22 10:55:32.680664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.267 [2024-07-22 10:55:32.680675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.267 qpair failed and we were unable to recover it. 00:39:27.267 [2024-07-22 10:55:32.680998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.267 [2024-07-22 10:55:32.681008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.267 qpair failed and we were unable to recover it. 00:39:27.267 [2024-07-22 10:55:32.681353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.267 [2024-07-22 10:55:32.681363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.267 qpair failed and we were unable to recover it. 00:39:27.267 [2024-07-22 10:55:32.681708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.267 [2024-07-22 10:55:32.681719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.267 qpair failed and we were unable to recover it. 
00:39:27.267 [2024-07-22 10:55:32.682056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.267 [2024-07-22 10:55:32.682068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.267 qpair failed and we were unable to recover it. 00:39:27.267 [2024-07-22 10:55:32.682407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.267 [2024-07-22 10:55:32.682418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.267 qpair failed and we were unable to recover it. 00:39:27.267 [2024-07-22 10:55:32.682727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.267 [2024-07-22 10:55:32.682739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.267 qpair failed and we were unable to recover it. 00:39:27.267 [2024-07-22 10:55:32.683080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.267 [2024-07-22 10:55:32.683091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.267 qpair failed and we were unable to recover it. 00:39:27.267 [2024-07-22 10:55:32.683403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.267 [2024-07-22 10:55:32.683415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.267 qpair failed and we were unable to recover it. 00:39:27.267 [2024-07-22 10:55:32.683730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.267 [2024-07-22 10:55:32.683740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.267 qpair failed and we were unable to recover it. 00:39:27.267 [2024-07-22 10:55:32.684034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.268 [2024-07-22 10:55:32.684045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.268 qpair failed and we were unable to recover it. 00:39:27.268 [2024-07-22 10:55:32.684424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.268 [2024-07-22 10:55:32.684435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.268 qpair failed and we were unable to recover it. 00:39:27.268 [2024-07-22 10:55:32.684748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.268 [2024-07-22 10:55:32.684759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.268 qpair failed and we were unable to recover it. 00:39:27.268 [2024-07-22 10:55:32.685087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.268 [2024-07-22 10:55:32.685098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.268 qpair failed and we were unable to recover it. 
00:39:27.268 [2024-07-22 10:55:32.685447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.268 [2024-07-22 10:55:32.685458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.268 qpair failed and we were unable to recover it. 00:39:27.268 [2024-07-22 10:55:32.685687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.268 [2024-07-22 10:55:32.685698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.268 qpair failed and we were unable to recover it. 00:39:27.268 [2024-07-22 10:55:32.685991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.268 [2024-07-22 10:55:32.686003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.268 qpair failed and we were unable to recover it. 00:39:27.268 [2024-07-22 10:55:32.686329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.268 [2024-07-22 10:55:32.686340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.268 qpair failed and we were unable to recover it. 00:39:27.268 [2024-07-22 10:55:32.686566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.268 [2024-07-22 10:55:32.686579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.268 qpair failed and we were unable to recover it. 00:39:27.268 [2024-07-22 10:55:32.686890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.268 [2024-07-22 10:55:32.686902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.268 qpair failed and we were unable to recover it. 00:39:27.268 [2024-07-22 10:55:32.687222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.268 [2024-07-22 10:55:32.687233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.268 qpair failed and we were unable to recover it. 00:39:27.268 [2024-07-22 10:55:32.687597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.268 [2024-07-22 10:55:32.687608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.268 qpair failed and we were unable to recover it. 00:39:27.268 [2024-07-22 10:55:32.687907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.268 [2024-07-22 10:55:32.687918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.268 qpair failed and we were unable to recover it. 00:39:27.268 [2024-07-22 10:55:32.688229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.268 [2024-07-22 10:55:32.688240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.268 qpair failed and we were unable to recover it. 
00:39:27.268 [2024-07-22 10:55:32.688540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.268 [2024-07-22 10:55:32.688551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.268 qpair failed and we were unable to recover it. 00:39:27.268 [2024-07-22 10:55:32.688852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.268 [2024-07-22 10:55:32.688863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.268 qpair failed and we were unable to recover it. 00:39:27.268 [2024-07-22 10:55:32.689182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.268 [2024-07-22 10:55:32.689194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.268 qpair failed and we were unable to recover it. 00:39:27.268 [2024-07-22 10:55:32.689502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.268 [2024-07-22 10:55:32.689513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.268 qpair failed and we were unable to recover it. 00:39:27.268 [2024-07-22 10:55:32.689853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.268 [2024-07-22 10:55:32.689863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.268 qpair failed and we were unable to recover it. 00:39:27.268 [2024-07-22 10:55:32.690158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.268 [2024-07-22 10:55:32.690169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.268 qpair failed and we were unable to recover it. 00:39:27.268 [2024-07-22 10:55:32.690405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.268 [2024-07-22 10:55:32.690416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.268 qpair failed and we were unable to recover it. 00:39:27.268 [2024-07-22 10:55:32.690732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.268 [2024-07-22 10:55:32.690743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.268 qpair failed and we were unable to recover it. 00:39:27.268 [2024-07-22 10:55:32.691062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.268 [2024-07-22 10:55:32.691073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.268 qpair failed and we were unable to recover it. 00:39:27.268 [2024-07-22 10:55:32.691477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.268 [2024-07-22 10:55:32.691488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.268 qpair failed and we were unable to recover it. 
00:39:27.268 [2024-07-22 10:55:32.691802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.268 [2024-07-22 10:55:32.691813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.268 qpair failed and we were unable to recover it. 00:39:27.268 [2024-07-22 10:55:32.692046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.268 [2024-07-22 10:55:32.692057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.268 qpair failed and we were unable to recover it. 00:39:27.268 [2024-07-22 10:55:32.692370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.268 [2024-07-22 10:55:32.692382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.268 qpair failed and we were unable to recover it. 00:39:27.268 [2024-07-22 10:55:32.692691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.268 [2024-07-22 10:55:32.692702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.268 qpair failed and we were unable to recover it. 00:39:27.268 [2024-07-22 10:55:32.693042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.268 [2024-07-22 10:55:32.693053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.268 qpair failed and we were unable to recover it. 00:39:27.268 [2024-07-22 10:55:32.693361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.268 [2024-07-22 10:55:32.693372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.268 qpair failed and we were unable to recover it. 00:39:27.268 [2024-07-22 10:55:32.693758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.268 [2024-07-22 10:55:32.693769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.268 qpair failed and we were unable to recover it. 00:39:27.268 [2024-07-22 10:55:32.694088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.268 [2024-07-22 10:55:32.694100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.268 qpair failed and we were unable to recover it. 00:39:27.268 [2024-07-22 10:55:32.694447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.268 [2024-07-22 10:55:32.694458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.268 qpair failed and we were unable to recover it. 00:39:27.268 [2024-07-22 10:55:32.694774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.268 [2024-07-22 10:55:32.694784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.268 qpair failed and we were unable to recover it. 
00:39:27.268 [2024-07-22 10:55:32.695104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.268 [2024-07-22 10:55:32.695115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.268 qpair failed and we were unable to recover it. 00:39:27.268 [2024-07-22 10:55:32.695436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.268 [2024-07-22 10:55:32.695449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.268 qpair failed and we were unable to recover it. 00:39:27.268 [2024-07-22 10:55:32.695756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.268 [2024-07-22 10:55:32.695766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.268 qpair failed and we were unable to recover it. 00:39:27.268 [2024-07-22 10:55:32.696115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.268 [2024-07-22 10:55:32.696126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.268 qpair failed and we were unable to recover it. 00:39:27.268 [2024-07-22 10:55:32.696455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.268 [2024-07-22 10:55:32.696465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.268 qpair failed and we were unable to recover it. 00:39:27.268 [2024-07-22 10:55:32.696790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.268 [2024-07-22 10:55:32.696800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.268 qpair failed and we were unable to recover it. 00:39:27.268 [2024-07-22 10:55:32.697145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.268 [2024-07-22 10:55:32.697156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.268 qpair failed and we were unable to recover it. 00:39:27.269 [2024-07-22 10:55:32.697498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.269 [2024-07-22 10:55:32.697510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.269 qpair failed and we were unable to recover it. 00:39:27.269 [2024-07-22 10:55:32.697855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.269 [2024-07-22 10:55:32.697865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.269 qpair failed and we were unable to recover it. 00:39:27.269 [2024-07-22 10:55:32.698186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.269 [2024-07-22 10:55:32.698196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.269 qpair failed and we were unable to recover it. 
00:39:27.269 [2024-07-22 10:55:32.698545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.269 [2024-07-22 10:55:32.698556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.269 qpair failed and we were unable to recover it. 00:39:27.269 [2024-07-22 10:55:32.698933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.269 [2024-07-22 10:55:32.698943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.269 qpair failed and we were unable to recover it. 00:39:27.269 [2024-07-22 10:55:32.699297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.269 [2024-07-22 10:55:32.699307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.269 qpair failed and we were unable to recover it. 00:39:27.269 [2024-07-22 10:55:32.699642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.269 [2024-07-22 10:55:32.699654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.269 qpair failed and we were unable to recover it. 00:39:27.269 [2024-07-22 10:55:32.699992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.269 [2024-07-22 10:55:32.700003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.269 qpair failed and we were unable to recover it. 00:39:27.269 [2024-07-22 10:55:32.700357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.269 [2024-07-22 10:55:32.700369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.269 qpair failed and we were unable to recover it. 00:39:27.269 [2024-07-22 10:55:32.700707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.269 [2024-07-22 10:55:32.700718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.269 qpair failed and we were unable to recover it. 00:39:27.269 [2024-07-22 10:55:32.701042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.269 [2024-07-22 10:55:32.701054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.269 qpair failed and we were unable to recover it. 00:39:27.269 [2024-07-22 10:55:32.701430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.269 [2024-07-22 10:55:32.701442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.269 qpair failed and we were unable to recover it. 00:39:27.269 [2024-07-22 10:55:32.701748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.269 [2024-07-22 10:55:32.701758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.269 qpair failed and we were unable to recover it. 
00:39:27.269 [2024-07-22 10:55:32.702079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.269 [2024-07-22 10:55:32.702089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.269 qpair failed and we were unable to recover it. 00:39:27.269 [2024-07-22 10:55:32.702413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.269 [2024-07-22 10:55:32.702424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.269 qpair failed and we were unable to recover it. 00:39:27.269 [2024-07-22 10:55:32.702754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.269 [2024-07-22 10:55:32.702765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.269 qpair failed and we were unable to recover it. 00:39:27.269 [2024-07-22 10:55:32.703107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.269 [2024-07-22 10:55:32.703118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.269 qpair failed and we were unable to recover it. 00:39:27.269 [2024-07-22 10:55:32.703437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.269 [2024-07-22 10:55:32.703449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.269 qpair failed and we were unable to recover it. 00:39:27.269 [2024-07-22 10:55:32.703669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.269 [2024-07-22 10:55:32.703679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.269 qpair failed and we were unable to recover it. 00:39:27.269 [2024-07-22 10:55:32.703991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.269 [2024-07-22 10:55:32.704002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.269 qpair failed and we were unable to recover it. 00:39:27.269 [2024-07-22 10:55:32.704353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.269 [2024-07-22 10:55:32.704364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.269 qpair failed and we were unable to recover it. 00:39:27.269 [2024-07-22 10:55:32.704673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.269 [2024-07-22 10:55:32.704685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.269 qpair failed and we were unable to recover it. 00:39:27.269 [2024-07-22 10:55:32.704913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.269 [2024-07-22 10:55:32.704923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.269 qpair failed and we were unable to recover it. 
00:39:27.269 [2024-07-22 10:55:32.705266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.269 [2024-07-22 10:55:32.705277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.269 qpair failed and we were unable to recover it. 00:39:27.269 [2024-07-22 10:55:32.705610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.269 [2024-07-22 10:55:32.705620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.269 qpair failed and we were unable to recover it. 00:39:27.269 [2024-07-22 10:55:32.705956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.269 [2024-07-22 10:55:32.705966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.269 qpair failed and we were unable to recover it. 00:39:27.269 [2024-07-22 10:55:32.706291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.269 [2024-07-22 10:55:32.706301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.269 qpair failed and we were unable to recover it. 00:39:27.269 [2024-07-22 10:55:32.706617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.269 [2024-07-22 10:55:32.706627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.269 qpair failed and we were unable to recover it. 00:39:27.269 [2024-07-22 10:55:32.706969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.269 [2024-07-22 10:55:32.706979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.269 qpair failed and we were unable to recover it. 00:39:27.269 [2024-07-22 10:55:32.707166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.269 [2024-07-22 10:55:32.707177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.269 qpair failed and we were unable to recover it. 00:39:27.269 [2024-07-22 10:55:32.707392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.269 [2024-07-22 10:55:32.707407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.269 qpair failed and we were unable to recover it. 00:39:27.269 [2024-07-22 10:55:32.707725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.269 [2024-07-22 10:55:32.707736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.269 qpair failed and we were unable to recover it. 00:39:27.269 [2024-07-22 10:55:32.708085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.269 [2024-07-22 10:55:32.708096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.269 qpair failed and we were unable to recover it. 
00:39:27.269 [2024-07-22 10:55:32.708421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.269 [2024-07-22 10:55:32.708431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.269 qpair failed and we were unable to recover it. 00:39:27.269 [2024-07-22 10:55:32.708665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.269 [2024-07-22 10:55:32.708676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.269 qpair failed and we were unable to recover it. 00:39:27.269 [2024-07-22 10:55:32.708994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.269 [2024-07-22 10:55:32.709005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.269 qpair failed and we were unable to recover it. 00:39:27.269 [2024-07-22 10:55:32.709319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.269 [2024-07-22 10:55:32.709331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.269 qpair failed and we were unable to recover it. 00:39:27.269 [2024-07-22 10:55:32.709656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.270 [2024-07-22 10:55:32.709667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.270 qpair failed and we were unable to recover it. 00:39:27.270 [2024-07-22 10:55:32.709980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.270 [2024-07-22 10:55:32.709991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.270 qpair failed and we were unable to recover it. 00:39:27.270 [2024-07-22 10:55:32.710338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.270 [2024-07-22 10:55:32.710349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.270 qpair failed and we were unable to recover it. 00:39:27.270 [2024-07-22 10:55:32.710699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.270 [2024-07-22 10:55:32.710711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.270 qpair failed and we were unable to recover it. 00:39:27.270 [2024-07-22 10:55:32.710900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.270 [2024-07-22 10:55:32.710912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.270 qpair failed and we were unable to recover it. 00:39:27.270 [2024-07-22 10:55:32.711273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.270 [2024-07-22 10:55:32.711284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.270 qpair failed and we were unable to recover it. 
00:39:27.270 [2024-07-22 10:55:32.711593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.270 [2024-07-22 10:55:32.711605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.270 qpair failed and we were unable to recover it. 00:39:27.270 [2024-07-22 10:55:32.711961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.270 [2024-07-22 10:55:32.711973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.270 qpair failed and we were unable to recover it. 00:39:27.270 [2024-07-22 10:55:32.712291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.270 [2024-07-22 10:55:32.712302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.270 qpair failed and we were unable to recover it. 00:39:27.270 [2024-07-22 10:55:32.712655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.270 [2024-07-22 10:55:32.712667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.270 qpair failed and we were unable to recover it. 00:39:27.270 [2024-07-22 10:55:32.713001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.270 [2024-07-22 10:55:32.713014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.270 qpair failed and we were unable to recover it. 00:39:27.270 [2024-07-22 10:55:32.713348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.270 [2024-07-22 10:55:32.713360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.270 qpair failed and we were unable to recover it. 00:39:27.270 [2024-07-22 10:55:32.713681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.270 [2024-07-22 10:55:32.713693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.270 qpair failed and we were unable to recover it. 00:39:27.270 [2024-07-22 10:55:32.714030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.270 [2024-07-22 10:55:32.714042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.270 qpair failed and we were unable to recover it. 00:39:27.270 [2024-07-22 10:55:32.714383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.270 [2024-07-22 10:55:32.714394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.270 qpair failed and we were unable to recover it. 00:39:27.270 [2024-07-22 10:55:32.714730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.270 [2024-07-22 10:55:32.714742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.270 qpair failed and we were unable to recover it. 
00:39:27.270 [2024-07-22 10:55:32.715067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.270 [2024-07-22 10:55:32.715078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:27.270 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats continuously between 10:55:32.715 and 10:55:32.785 (wall clock 00:39:27.270-00:39:27.275): connect() to 10.0.0.2:4420 is refused with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x1c098a0, and each qpair fails without recovery ...]
00:39:27.275 [2024-07-22 10:55:32.785200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.275 [2024-07-22 10:55:32.785211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:27.275 qpair failed and we were unable to recover it.
00:39:27.275 [2024-07-22 10:55:32.785554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.275 [2024-07-22 10:55:32.785566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.275 qpair failed and we were unable to recover it. 00:39:27.275 [2024-07-22 10:55:32.785908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.275 [2024-07-22 10:55:32.785919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.275 qpair failed and we were unable to recover it. 00:39:27.275 [2024-07-22 10:55:32.786241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.275 [2024-07-22 10:55:32.786251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.275 qpair failed and we were unable to recover it. 00:39:27.275 [2024-07-22 10:55:32.786497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.275 [2024-07-22 10:55:32.786507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.275 qpair failed and we were unable to recover it. 00:39:27.275 [2024-07-22 10:55:32.786751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.275 [2024-07-22 10:55:32.786761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.275 qpair failed and we were unable to recover it. 00:39:27.275 [2024-07-22 10:55:32.787100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.275 [2024-07-22 10:55:32.787110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.275 qpair failed and we were unable to recover it. 00:39:27.275 [2024-07-22 10:55:32.787433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.275 [2024-07-22 10:55:32.787444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.275 qpair failed and we were unable to recover it. 00:39:27.275 [2024-07-22 10:55:32.787771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.275 [2024-07-22 10:55:32.787781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.275 qpair failed and we were unable to recover it. 00:39:27.275 [2024-07-22 10:55:32.788131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.275 [2024-07-22 10:55:32.788142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.275 qpair failed and we were unable to recover it. 00:39:27.275 [2024-07-22 10:55:32.788482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.275 [2024-07-22 10:55:32.788493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.275 qpair failed and we were unable to recover it. 
00:39:27.275 [2024-07-22 10:55:32.788814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.275 [2024-07-22 10:55:32.788825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.275 qpair failed and we were unable to recover it. 00:39:27.275 [2024-07-22 10:55:32.789145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.275 [2024-07-22 10:55:32.789156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.276 qpair failed and we were unable to recover it. 00:39:27.276 [2024-07-22 10:55:32.789466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.276 [2024-07-22 10:55:32.789477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.276 qpair failed and we were unable to recover it. 00:39:27.276 [2024-07-22 10:55:32.789798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.276 [2024-07-22 10:55:32.789809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.276 qpair failed and we were unable to recover it. 00:39:27.276 [2024-07-22 10:55:32.790128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.276 [2024-07-22 10:55:32.790140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.276 qpair failed and we were unable to recover it. 00:39:27.276 [2024-07-22 10:55:32.790463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.276 [2024-07-22 10:55:32.790474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.276 qpair failed and we were unable to recover it. 00:39:27.276 [2024-07-22 10:55:32.790699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.276 [2024-07-22 10:55:32.790711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.276 qpair failed and we were unable to recover it. 00:39:27.276 [2024-07-22 10:55:32.791029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.276 [2024-07-22 10:55:32.791039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.276 qpair failed and we were unable to recover it. 00:39:27.276 [2024-07-22 10:55:32.791376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.276 [2024-07-22 10:55:32.791387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.276 qpair failed and we were unable to recover it. 00:39:27.276 [2024-07-22 10:55:32.791684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.276 [2024-07-22 10:55:32.791697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.276 qpair failed and we were unable to recover it. 
00:39:27.276 [2024-07-22 10:55:32.792038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.276 [2024-07-22 10:55:32.792049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.276 qpair failed and we were unable to recover it. 00:39:27.276 [2024-07-22 10:55:32.792392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.276 [2024-07-22 10:55:32.792407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.276 qpair failed and we were unable to recover it. 00:39:27.276 [2024-07-22 10:55:32.792727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.276 [2024-07-22 10:55:32.792739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.276 qpair failed and we were unable to recover it. 00:39:27.276 [2024-07-22 10:55:32.793059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.276 [2024-07-22 10:55:32.793069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.276 qpair failed and we were unable to recover it. 00:39:27.276 [2024-07-22 10:55:32.793386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.276 [2024-07-22 10:55:32.793406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.276 qpair failed and we were unable to recover it. 00:39:27.276 [2024-07-22 10:55:32.793719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.276 [2024-07-22 10:55:32.793730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.276 qpair failed and we were unable to recover it. 00:39:27.276 [2024-07-22 10:55:32.794074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.276 [2024-07-22 10:55:32.794084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.276 qpair failed and we were unable to recover it. 00:39:27.276 [2024-07-22 10:55:32.794409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.276 [2024-07-22 10:55:32.794420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.276 qpair failed and we were unable to recover it. 00:39:27.276 [2024-07-22 10:55:32.794611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.276 [2024-07-22 10:55:32.794623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.276 qpair failed and we were unable to recover it. 00:39:27.276 [2024-07-22 10:55:32.794935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.276 [2024-07-22 10:55:32.794946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.276 qpair failed and we were unable to recover it. 
00:39:27.276 [2024-07-22 10:55:32.795250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.276 [2024-07-22 10:55:32.795261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.276 qpair failed and we were unable to recover it. 00:39:27.276 [2024-07-22 10:55:32.795444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.276 [2024-07-22 10:55:32.795455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.276 qpair failed and we were unable to recover it. 00:39:27.276 [2024-07-22 10:55:32.795756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.276 [2024-07-22 10:55:32.795767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.276 qpair failed and we were unable to recover it. 00:39:27.276 [2024-07-22 10:55:32.796078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.276 [2024-07-22 10:55:32.796090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.276 qpair failed and we were unable to recover it. 00:39:27.276 [2024-07-22 10:55:32.796410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.276 [2024-07-22 10:55:32.796421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.276 qpair failed and we were unable to recover it. 00:39:27.276 [2024-07-22 10:55:32.796756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.276 [2024-07-22 10:55:32.796767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.276 qpair failed and we were unable to recover it. 00:39:27.276 [2024-07-22 10:55:32.797110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.276 [2024-07-22 10:55:32.797122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.276 qpair failed and we were unable to recover it. 00:39:27.276 [2024-07-22 10:55:32.797431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.276 [2024-07-22 10:55:32.797442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.276 qpair failed and we were unable to recover it. 00:39:27.276 [2024-07-22 10:55:32.797777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.276 [2024-07-22 10:55:32.797787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.276 qpair failed and we were unable to recover it. 00:39:27.276 [2024-07-22 10:55:32.798128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.276 [2024-07-22 10:55:32.798139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.276 qpair failed and we were unable to recover it. 
00:39:27.276 [2024-07-22 10:55:32.798484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.276 [2024-07-22 10:55:32.798496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.276 qpair failed and we were unable to recover it. 00:39:27.276 [2024-07-22 10:55:32.798848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.276 [2024-07-22 10:55:32.798858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.276 qpair failed and we were unable to recover it. 00:39:27.276 [2024-07-22 10:55:32.799181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.276 [2024-07-22 10:55:32.799192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.276 qpair failed and we were unable to recover it. 00:39:27.276 [2024-07-22 10:55:32.799546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.276 [2024-07-22 10:55:32.799562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.276 qpair failed and we were unable to recover it. 00:39:27.276 [2024-07-22 10:55:32.799880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.276 [2024-07-22 10:55:32.799890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.276 qpair failed and we were unable to recover it. 00:39:27.276 [2024-07-22 10:55:32.800232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.276 [2024-07-22 10:55:32.800243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.276 qpair failed and we were unable to recover it. 00:39:27.276 [2024-07-22 10:55:32.800562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.276 [2024-07-22 10:55:32.800573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.276 qpair failed and we were unable to recover it. 00:39:27.276 [2024-07-22 10:55:32.800629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.276 [2024-07-22 10:55:32.800640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.276 qpair failed and we were unable to recover it. 00:39:27.276 [2024-07-22 10:55:32.800951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.276 [2024-07-22 10:55:32.800961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.276 qpair failed and we were unable to recover it. 00:39:27.276 [2024-07-22 10:55:32.801270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.276 [2024-07-22 10:55:32.801282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.276 qpair failed and we were unable to recover it. 
00:39:27.276 [2024-07-22 10:55:32.801620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.276 [2024-07-22 10:55:32.801631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.276 qpair failed and we were unable to recover it. 00:39:27.276 [2024-07-22 10:55:32.801950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.276 [2024-07-22 10:55:32.801961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.276 qpair failed and we were unable to recover it. 00:39:27.276 [2024-07-22 10:55:32.802285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.276 [2024-07-22 10:55:32.802296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.276 qpair failed and we were unable to recover it. 00:39:27.277 [2024-07-22 10:55:32.802622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.277 [2024-07-22 10:55:32.802634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.277 qpair failed and we were unable to recover it. 00:39:27.277 [2024-07-22 10:55:32.802976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.277 [2024-07-22 10:55:32.802987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.277 qpair failed and we were unable to recover it. 00:39:27.277 [2024-07-22 10:55:32.803305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.277 [2024-07-22 10:55:32.803317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.277 qpair failed and we were unable to recover it. 00:39:27.277 [2024-07-22 10:55:32.803638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.277 [2024-07-22 10:55:32.803649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.277 qpair failed and we were unable to recover it. 00:39:27.277 [2024-07-22 10:55:32.803948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.277 [2024-07-22 10:55:32.803960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.277 qpair failed and we were unable to recover it. 00:39:27.277 [2024-07-22 10:55:32.804257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.277 [2024-07-22 10:55:32.804269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.277 qpair failed and we were unable to recover it. 00:39:27.277 [2024-07-22 10:55:32.804583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.277 [2024-07-22 10:55:32.804594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.277 qpair failed and we were unable to recover it. 
00:39:27.277 [2024-07-22 10:55:32.804879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.277 [2024-07-22 10:55:32.804891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.277 qpair failed and we were unable to recover it. 00:39:27.277 [2024-07-22 10:55:32.805231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.277 [2024-07-22 10:55:32.805242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.277 qpair failed and we were unable to recover it. 00:39:27.277 [2024-07-22 10:55:32.805562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.277 [2024-07-22 10:55:32.805574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.277 qpair failed and we were unable to recover it. 00:39:27.277 [2024-07-22 10:55:32.805913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.277 [2024-07-22 10:55:32.805923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.277 qpair failed and we were unable to recover it. 00:39:27.277 [2024-07-22 10:55:32.806243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.277 [2024-07-22 10:55:32.806254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.277 qpair failed and we were unable to recover it. 00:39:27.277 [2024-07-22 10:55:32.806594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.277 [2024-07-22 10:55:32.806605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.277 qpair failed and we were unable to recover it. 00:39:27.277 [2024-07-22 10:55:32.806958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.277 [2024-07-22 10:55:32.806968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.277 qpair failed and we were unable to recover it. 00:39:27.277 [2024-07-22 10:55:32.807292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.277 [2024-07-22 10:55:32.807304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.277 qpair failed and we were unable to recover it. 00:39:27.277 [2024-07-22 10:55:32.807581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.277 [2024-07-22 10:55:32.807591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.277 qpair failed and we were unable to recover it. 00:39:27.277 [2024-07-22 10:55:32.807965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.277 [2024-07-22 10:55:32.807976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.277 qpair failed and we were unable to recover it. 
00:39:27.277 [2024-07-22 10:55:32.808283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.277 [2024-07-22 10:55:32.808296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.277 qpair failed and we were unable to recover it. 00:39:27.277 [2024-07-22 10:55:32.808641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.277 [2024-07-22 10:55:32.808652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.277 qpair failed and we were unable to recover it. 00:39:27.277 [2024-07-22 10:55:32.808978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.277 [2024-07-22 10:55:32.808988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.277 qpair failed and we were unable to recover it. 00:39:27.277 [2024-07-22 10:55:32.809329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.277 [2024-07-22 10:55:32.809340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.277 qpair failed and we were unable to recover it. 00:39:27.277 [2024-07-22 10:55:32.809699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.277 [2024-07-22 10:55:32.809710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.277 qpair failed and we were unable to recover it. 00:39:27.277 [2024-07-22 10:55:32.810038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.277 [2024-07-22 10:55:32.810049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.277 qpair failed and we were unable to recover it. 00:39:27.277 [2024-07-22 10:55:32.810370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.277 [2024-07-22 10:55:32.810381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.277 qpair failed and we were unable to recover it. 00:39:27.277 [2024-07-22 10:55:32.810705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.277 [2024-07-22 10:55:32.810716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.277 qpair failed and we were unable to recover it. 00:39:27.277 [2024-07-22 10:55:32.811058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.277 [2024-07-22 10:55:32.811070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.277 qpair failed and we were unable to recover it. 00:39:27.277 [2024-07-22 10:55:32.811398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.277 [2024-07-22 10:55:32.811409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.277 qpair failed and we were unable to recover it. 
00:39:27.277 [2024-07-22 10:55:32.811745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.277 [2024-07-22 10:55:32.811755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.277 qpair failed and we were unable to recover it. 00:39:27.277 [2024-07-22 10:55:32.812098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.277 [2024-07-22 10:55:32.812108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.277 qpair failed and we were unable to recover it. 00:39:27.277 [2024-07-22 10:55:32.812454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.277 [2024-07-22 10:55:32.812466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.277 qpair failed and we were unable to recover it. 00:39:27.277 [2024-07-22 10:55:32.812819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.277 [2024-07-22 10:55:32.812829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.277 qpair failed and we were unable to recover it. 00:39:27.277 [2024-07-22 10:55:32.813151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.277 [2024-07-22 10:55:32.813162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.277 qpair failed and we were unable to recover it. 00:39:27.277 [2024-07-22 10:55:32.813507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.277 [2024-07-22 10:55:32.813518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.277 qpair failed and we were unable to recover it. 00:39:27.277 [2024-07-22 10:55:32.813833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.277 [2024-07-22 10:55:32.813845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.277 qpair failed and we were unable to recover it. 00:39:27.277 [2024-07-22 10:55:32.814177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.277 [2024-07-22 10:55:32.814188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.277 qpair failed and we were unable to recover it. 00:39:27.277 [2024-07-22 10:55:32.814507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.277 [2024-07-22 10:55:32.814518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.277 qpair failed and we were unable to recover it. 00:39:27.277 [2024-07-22 10:55:32.814861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.277 [2024-07-22 10:55:32.814871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.277 qpair failed and we were unable to recover it. 
00:39:27.277 [2024-07-22 10:55:32.815215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.277 [2024-07-22 10:55:32.815226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.277 qpair failed and we were unable to recover it. 00:39:27.277 [2024-07-22 10:55:32.815547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.277 [2024-07-22 10:55:32.815558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.277 qpair failed and we were unable to recover it. 00:39:27.277 [2024-07-22 10:55:32.815937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.277 [2024-07-22 10:55:32.815948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.277 qpair failed and we were unable to recover it. 00:39:27.277 [2024-07-22 10:55:32.816259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.278 [2024-07-22 10:55:32.816271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.278 qpair failed and we were unable to recover it. 00:39:27.278 [2024-07-22 10:55:32.816585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.278 [2024-07-22 10:55:32.816596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.278 qpair failed and we were unable to recover it. 00:39:27.278 [2024-07-22 10:55:32.816912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.278 [2024-07-22 10:55:32.816924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.278 qpair failed and we were unable to recover it. 00:39:27.278 [2024-07-22 10:55:32.817245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.278 [2024-07-22 10:55:32.817257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.278 qpair failed and we were unable to recover it. 00:39:27.278 [2024-07-22 10:55:32.817582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.278 [2024-07-22 10:55:32.817593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.278 qpair failed and we were unable to recover it. 00:39:27.278 [2024-07-22 10:55:32.817943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.278 [2024-07-22 10:55:32.817953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.278 qpair failed and we were unable to recover it. 00:39:27.278 [2024-07-22 10:55:32.818275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.278 [2024-07-22 10:55:32.818286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.278 qpair failed and we were unable to recover it. 
00:39:27.278 [2024-07-22 10:55:32.818624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.278 [2024-07-22 10:55:32.818635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.278 qpair failed and we were unable to recover it. 00:39:27.278 [2024-07-22 10:55:32.818983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.278 [2024-07-22 10:55:32.818994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.278 qpair failed and we were unable to recover it. 00:39:27.278 [2024-07-22 10:55:32.819298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.278 [2024-07-22 10:55:32.819308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.278 qpair failed and we were unable to recover it. 00:39:27.278 [2024-07-22 10:55:32.819658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.278 [2024-07-22 10:55:32.819669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.278 qpair failed and we were unable to recover it. 00:39:27.278 [2024-07-22 10:55:32.819880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.278 [2024-07-22 10:55:32.819890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.278 qpair failed and we were unable to recover it. 00:39:27.278 [2024-07-22 10:55:32.820205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.278 [2024-07-22 10:55:32.820216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.278 qpair failed and we were unable to recover it. 00:39:27.278 [2024-07-22 10:55:32.820529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.278 [2024-07-22 10:55:32.820541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.278 qpair failed and we were unable to recover it. 00:39:27.278 [2024-07-22 10:55:32.820860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.278 [2024-07-22 10:55:32.820871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.278 qpair failed and we were unable to recover it. 00:39:27.278 [2024-07-22 10:55:32.821196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.278 [2024-07-22 10:55:32.821207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.278 qpair failed and we were unable to recover it. 00:39:27.278 [2024-07-22 10:55:32.821552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.278 [2024-07-22 10:55:32.821563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.278 qpair failed and we were unable to recover it. 
00:39:27.278 [2024-07-22 10:55:32.821878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.278 [2024-07-22 10:55:32.821888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.278 qpair failed and we were unable to recover it. 00:39:27.278 [2024-07-22 10:55:32.822212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.278 [2024-07-22 10:55:32.822223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.278 qpair failed and we were unable to recover it. 00:39:27.278 [2024-07-22 10:55:32.822543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.278 [2024-07-22 10:55:32.822554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.278 qpair failed and we were unable to recover it. 00:39:27.278 [2024-07-22 10:55:32.822902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.278 [2024-07-22 10:55:32.822913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.278 qpair failed and we were unable to recover it. 00:39:27.278 [2024-07-22 10:55:32.823263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.278 [2024-07-22 10:55:32.823273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.278 qpair failed and we were unable to recover it. 00:39:27.278 [2024-07-22 10:55:32.823590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.278 [2024-07-22 10:55:32.823601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.278 qpair failed and we were unable to recover it. 00:39:27.278 [2024-07-22 10:55:32.823924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.278 [2024-07-22 10:55:32.823934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.278 qpair failed and we were unable to recover it. 00:39:27.278 [2024-07-22 10:55:32.824279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.278 [2024-07-22 10:55:32.824290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.278 qpair failed and we were unable to recover it. 00:39:27.278 [2024-07-22 10:55:32.824633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.278 [2024-07-22 10:55:32.824644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.278 qpair failed and we were unable to recover it. 00:39:27.278 [2024-07-22 10:55:32.824962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.278 [2024-07-22 10:55:32.824974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.278 qpair failed and we were unable to recover it. 
00:39:27.278 [2024-07-22 10:55:32.825295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.278 [2024-07-22 10:55:32.825307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.278 qpair failed and we were unable to recover it. 00:39:27.278 [2024-07-22 10:55:32.825624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.278 [2024-07-22 10:55:32.825634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.278 qpair failed and we were unable to recover it. 00:39:27.278 [2024-07-22 10:55:32.825973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.278 [2024-07-22 10:55:32.825985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.278 qpair failed and we were unable to recover it. 00:39:27.278 [2024-07-22 10:55:32.826311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.278 [2024-07-22 10:55:32.826322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.278 qpair failed and we were unable to recover it. 00:39:27.278 [2024-07-22 10:55:32.826634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.278 [2024-07-22 10:55:32.826645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.278 qpair failed and we were unable to recover it. 00:39:27.278 [2024-07-22 10:55:32.826997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.278 [2024-07-22 10:55:32.827009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.278 qpair failed and we were unable to recover it. 00:39:27.278 [2024-07-22 10:55:32.827392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.278 [2024-07-22 10:55:32.827408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.278 qpair failed and we were unable to recover it. 00:39:27.278 [2024-07-22 10:55:32.827708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.278 [2024-07-22 10:55:32.827718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.278 qpair failed and we were unable to recover it. 00:39:27.278 [2024-07-22 10:55:32.828052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.278 [2024-07-22 10:55:32.828063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.278 qpair failed and we were unable to recover it. 00:39:27.279 [2024-07-22 10:55:32.828383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.279 [2024-07-22 10:55:32.828393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.279 qpair failed and we were unable to recover it. 
00:39:27.279 [2024-07-22 10:55:32.828754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.279 [2024-07-22 10:55:32.828765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:27.279 qpair failed and we were unable to recover it.
[... the same three-line error record repeats continuously between 10:55:32.828754 and 10:55:32.896852: every connect() attempt fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x1c098a0 with addr=10.0.0.2, port=4420, and each qpair fails and cannot be recovered ...]
00:39:27.284 [2024-07-22 10:55:32.896842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.284 [2024-07-22 10:55:32.896852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:27.284 qpair failed and we were unable to recover it.
00:39:27.284 [2024-07-22 10:55:32.897067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.284 [2024-07-22 10:55:32.897077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.284 qpair failed and we were unable to recover it. 00:39:27.284 [2024-07-22 10:55:32.897400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.284 [2024-07-22 10:55:32.897412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.284 qpair failed and we were unable to recover it. 00:39:27.284 [2024-07-22 10:55:32.897765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.284 [2024-07-22 10:55:32.897775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.284 qpair failed and we were unable to recover it. 00:39:27.284 [2024-07-22 10:55:32.897987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.284 [2024-07-22 10:55:32.897997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.284 qpair failed and we were unable to recover it. 00:39:27.284 [2024-07-22 10:55:32.898317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.284 [2024-07-22 10:55:32.898327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.284 qpair failed and we were unable to recover it. 00:39:27.284 [2024-07-22 10:55:32.898546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.284 [2024-07-22 10:55:32.898556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.284 qpair failed and we were unable to recover it. 00:39:27.284 [2024-07-22 10:55:32.898823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.284 [2024-07-22 10:55:32.898834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.284 qpair failed and we were unable to recover it. 00:39:27.284 [2024-07-22 10:55:32.899142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.284 [2024-07-22 10:55:32.899154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.284 qpair failed and we were unable to recover it. 00:39:27.284 [2024-07-22 10:55:32.899469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.284 [2024-07-22 10:55:32.899480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.284 qpair failed and we were unable to recover it. 00:39:27.284 [2024-07-22 10:55:32.899830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.284 [2024-07-22 10:55:32.899841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.284 qpair failed and we were unable to recover it. 
00:39:27.284 [2024-07-22 10:55:32.900181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.284 [2024-07-22 10:55:32.900191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.284 qpair failed and we were unable to recover it. 00:39:27.284 [2024-07-22 10:55:32.900381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.284 [2024-07-22 10:55:32.900391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.284 qpair failed and we were unable to recover it. 00:39:27.284 [2024-07-22 10:55:32.900683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.284 [2024-07-22 10:55:32.900694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.284 qpair failed and we were unable to recover it. 00:39:27.284 [2024-07-22 10:55:32.901037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.284 [2024-07-22 10:55:32.901049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.284 qpair failed and we were unable to recover it. 00:39:27.284 [2024-07-22 10:55:32.901389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.284 [2024-07-22 10:55:32.901408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.284 qpair failed and we were unable to recover it. 00:39:27.284 [2024-07-22 10:55:32.901699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.284 [2024-07-22 10:55:32.901712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.284 qpair failed and we were unable to recover it. 00:39:27.284 [2024-07-22 10:55:32.901921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.284 [2024-07-22 10:55:32.901931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.284 qpair failed and we were unable to recover it. 00:39:27.284 [2024-07-22 10:55:32.902251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.284 [2024-07-22 10:55:32.902261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.284 qpair failed and we were unable to recover it. 00:39:27.284 [2024-07-22 10:55:32.902480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.284 [2024-07-22 10:55:32.902490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.284 qpair failed and we were unable to recover it. 00:39:27.284 [2024-07-22 10:55:32.902837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.284 [2024-07-22 10:55:32.902848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.284 qpair failed and we were unable to recover it. 
00:39:27.284 [2024-07-22 10:55:32.903165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.284 [2024-07-22 10:55:32.903175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.284 qpair failed and we were unable to recover it. 00:39:27.284 [2024-07-22 10:55:32.903512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.284 [2024-07-22 10:55:32.903523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.284 qpair failed and we were unable to recover it. 00:39:27.284 [2024-07-22 10:55:32.903821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.284 [2024-07-22 10:55:32.903831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.284 qpair failed and we were unable to recover it. 00:39:27.284 [2024-07-22 10:55:32.904157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.284 [2024-07-22 10:55:32.904167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.284 qpair failed and we were unable to recover it. 00:39:27.284 [2024-07-22 10:55:32.904486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.284 [2024-07-22 10:55:32.904497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.284 qpair failed and we were unable to recover it. 00:39:27.284 [2024-07-22 10:55:32.904843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.284 [2024-07-22 10:55:32.904853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.284 qpair failed and we were unable to recover it. 00:39:27.284 [2024-07-22 10:55:32.905202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.284 [2024-07-22 10:55:32.905214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.284 qpair failed and we were unable to recover it. 00:39:27.284 [2024-07-22 10:55:32.905539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.284 [2024-07-22 10:55:32.905550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.284 qpair failed and we were unable to recover it. 00:39:27.284 [2024-07-22 10:55:32.905880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.284 [2024-07-22 10:55:32.905890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.284 qpair failed and we were unable to recover it. 00:39:27.284 [2024-07-22 10:55:32.906214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.284 [2024-07-22 10:55:32.906224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.285 qpair failed and we were unable to recover it. 
00:39:27.285 [2024-07-22 10:55:32.906564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.285 [2024-07-22 10:55:32.906575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.285 qpair failed and we were unable to recover it. 00:39:27.285 [2024-07-22 10:55:32.906901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.285 [2024-07-22 10:55:32.906912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.285 qpair failed and we were unable to recover it. 00:39:27.285 [2024-07-22 10:55:32.907231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.285 [2024-07-22 10:55:32.907241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.285 qpair failed and we were unable to recover it. 00:39:27.285 [2024-07-22 10:55:32.907584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.285 [2024-07-22 10:55:32.907595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.285 qpair failed and we were unable to recover it. 00:39:27.285 [2024-07-22 10:55:32.907937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.285 [2024-07-22 10:55:32.907948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.285 qpair failed and we were unable to recover it. 00:39:27.285 [2024-07-22 10:55:32.908269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.285 [2024-07-22 10:55:32.908279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.285 qpair failed and we were unable to recover it. 00:39:27.285 [2024-07-22 10:55:32.908617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.285 [2024-07-22 10:55:32.908629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.285 qpair failed and we were unable to recover it. 00:39:27.285 [2024-07-22 10:55:32.908975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.285 [2024-07-22 10:55:32.908986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.285 qpair failed and we were unable to recover it. 00:39:27.285 [2024-07-22 10:55:32.909327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.285 [2024-07-22 10:55:32.909338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.285 qpair failed and we were unable to recover it. 00:39:27.285 [2024-07-22 10:55:32.909631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.285 [2024-07-22 10:55:32.909641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.285 qpair failed and we were unable to recover it. 
00:39:27.285 [2024-07-22 10:55:32.909958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.285 [2024-07-22 10:55:32.909968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.285 qpair failed and we were unable to recover it. 00:39:27.285 [2024-07-22 10:55:32.910197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.285 [2024-07-22 10:55:32.910207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.285 qpair failed and we were unable to recover it. 00:39:27.285 [2024-07-22 10:55:32.910520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.285 [2024-07-22 10:55:32.910534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.285 qpair failed and we were unable to recover it. 00:39:27.285 [2024-07-22 10:55:32.910891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.285 [2024-07-22 10:55:32.910902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.285 qpair failed and we were unable to recover it. 00:39:27.285 [2024-07-22 10:55:32.911058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.285 [2024-07-22 10:55:32.911069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.285 qpair failed and we were unable to recover it. 00:39:27.285 [2024-07-22 10:55:32.911406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.285 [2024-07-22 10:55:32.911417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.285 qpair failed and we were unable to recover it. 00:39:27.285 [2024-07-22 10:55:32.911759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.285 [2024-07-22 10:55:32.911771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.285 qpair failed and we were unable to recover it. 00:39:27.285 [2024-07-22 10:55:32.912126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.285 [2024-07-22 10:55:32.912137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.285 qpair failed and we were unable to recover it. 00:39:27.285 [2024-07-22 10:55:32.912406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.285 [2024-07-22 10:55:32.912417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.285 qpair failed and we were unable to recover it. 00:39:27.285 [2024-07-22 10:55:32.912734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.285 [2024-07-22 10:55:32.912745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.285 qpair failed and we were unable to recover it. 
00:39:27.285 [2024-07-22 10:55:32.913094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.285 [2024-07-22 10:55:32.913104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.285 qpair failed and we were unable to recover it. 00:39:27.285 [2024-07-22 10:55:32.913355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.285 [2024-07-22 10:55:32.913365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.285 qpair failed and we were unable to recover it. 00:39:27.285 [2024-07-22 10:55:32.913661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.285 [2024-07-22 10:55:32.913672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.285 qpair failed and we were unable to recover it. 00:39:27.285 [2024-07-22 10:55:32.913844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.285 [2024-07-22 10:55:32.913855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.285 qpair failed and we were unable to recover it. 00:39:27.285 [2024-07-22 10:55:32.914154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.285 [2024-07-22 10:55:32.914165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.285 qpair failed and we were unable to recover it. 00:39:27.285 [2024-07-22 10:55:32.914484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.285 [2024-07-22 10:55:32.914495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.285 qpair failed and we were unable to recover it. 00:39:27.285 [2024-07-22 10:55:32.914813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.285 [2024-07-22 10:55:32.914825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.285 qpair failed and we were unable to recover it. 00:39:27.285 [2024-07-22 10:55:32.915201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.285 [2024-07-22 10:55:32.915211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.285 qpair failed and we were unable to recover it. 00:39:27.285 [2024-07-22 10:55:32.915555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.285 [2024-07-22 10:55:32.915566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.285 qpair failed and we were unable to recover it. 00:39:27.285 [2024-07-22 10:55:32.915912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.285 [2024-07-22 10:55:32.915923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.285 qpair failed and we were unable to recover it. 
00:39:27.285 [2024-07-22 10:55:32.916242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.285 [2024-07-22 10:55:32.916254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.285 qpair failed and we were unable to recover it. 00:39:27.285 [2024-07-22 10:55:32.916556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.285 [2024-07-22 10:55:32.916566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.285 qpair failed and we were unable to recover it. 00:39:27.285 [2024-07-22 10:55:32.916908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.285 [2024-07-22 10:55:32.916918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.285 qpair failed and we were unable to recover it. 00:39:27.285 [2024-07-22 10:55:32.917243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.285 [2024-07-22 10:55:32.917254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.285 qpair failed and we were unable to recover it. 00:39:27.285 [2024-07-22 10:55:32.917573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.285 [2024-07-22 10:55:32.917584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.285 qpair failed and we were unable to recover it. 00:39:27.285 [2024-07-22 10:55:32.917928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.285 [2024-07-22 10:55:32.917938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.285 qpair failed and we were unable to recover it. 00:39:27.285 [2024-07-22 10:55:32.918276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.285 [2024-07-22 10:55:32.918288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.285 qpair failed and we were unable to recover it. 00:39:27.285 [2024-07-22 10:55:32.918675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.285 [2024-07-22 10:55:32.918686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.285 qpair failed and we were unable to recover it. 00:39:27.285 [2024-07-22 10:55:32.919010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.285 [2024-07-22 10:55:32.919021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.285 qpair failed and we were unable to recover it. 00:39:27.285 [2024-07-22 10:55:32.919328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.285 [2024-07-22 10:55:32.919338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.285 qpair failed and we were unable to recover it. 
00:39:27.285 [2024-07-22 10:55:32.919683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.285 [2024-07-22 10:55:32.919694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.286 qpair failed and we were unable to recover it. 00:39:27.286 [2024-07-22 10:55:32.920037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.286 [2024-07-22 10:55:32.920048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.286 qpair failed and we were unable to recover it. 00:39:27.286 [2024-07-22 10:55:32.920367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.286 [2024-07-22 10:55:32.920377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.286 qpair failed and we were unable to recover it. 00:39:27.286 [2024-07-22 10:55:32.920566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.286 [2024-07-22 10:55:32.920578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.286 qpair failed and we were unable to recover it. 00:39:27.286 [2024-07-22 10:55:32.920916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.286 [2024-07-22 10:55:32.920928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.286 qpair failed and we were unable to recover it. 00:39:27.286 [2024-07-22 10:55:32.921247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.286 [2024-07-22 10:55:32.921257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.286 qpair failed and we were unable to recover it. 00:39:27.286 [2024-07-22 10:55:32.921579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.286 [2024-07-22 10:55:32.921589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.286 qpair failed and we were unable to recover it. 00:39:27.286 [2024-07-22 10:55:32.921942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.286 [2024-07-22 10:55:32.921952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.286 qpair failed and we were unable to recover it. 00:39:27.286 [2024-07-22 10:55:32.922250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.286 [2024-07-22 10:55:32.922261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.286 qpair failed and we were unable to recover it. 00:39:27.286 [2024-07-22 10:55:32.922591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.286 [2024-07-22 10:55:32.922603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.286 qpair failed and we were unable to recover it. 
00:39:27.286 [2024-07-22 10:55:32.922923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.286 [2024-07-22 10:55:32.922935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.286 qpair failed and we were unable to recover it. 00:39:27.286 [2024-07-22 10:55:32.923280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.286 [2024-07-22 10:55:32.923291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.286 qpair failed and we were unable to recover it. 00:39:27.286 [2024-07-22 10:55:32.923642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.286 [2024-07-22 10:55:32.923654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.286 qpair failed and we were unable to recover it. 00:39:27.286 [2024-07-22 10:55:32.923972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.286 [2024-07-22 10:55:32.923982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.286 qpair failed and we were unable to recover it. 00:39:27.286 [2024-07-22 10:55:32.924307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.286 [2024-07-22 10:55:32.924317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.286 qpair failed and we were unable to recover it. 00:39:27.286 [2024-07-22 10:55:32.924657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.286 [2024-07-22 10:55:32.924669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.286 qpair failed and we were unable to recover it. 00:39:27.286 [2024-07-22 10:55:32.925016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.286 [2024-07-22 10:55:32.925026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.286 qpair failed and we were unable to recover it. 00:39:27.286 [2024-07-22 10:55:32.925387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.286 [2024-07-22 10:55:32.925402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.286 qpair failed and we were unable to recover it. 00:39:27.286 [2024-07-22 10:55:32.925721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.286 [2024-07-22 10:55:32.925733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.286 qpair failed and we were unable to recover it. 00:39:27.286 [2024-07-22 10:55:32.926056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.286 [2024-07-22 10:55:32.926067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.286 qpair failed and we were unable to recover it. 
00:39:27.286 [2024-07-22 10:55:32.926410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.286 [2024-07-22 10:55:32.926422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.286 qpair failed and we were unable to recover it. 00:39:27.286 [2024-07-22 10:55:32.926755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.286 [2024-07-22 10:55:32.926765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.286 qpair failed and we were unable to recover it. 00:39:27.286 [2024-07-22 10:55:32.927033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.286 [2024-07-22 10:55:32.927043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.286 qpair failed and we were unable to recover it. 00:39:27.286 [2024-07-22 10:55:32.927370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.286 [2024-07-22 10:55:32.927380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.286 qpair failed and we were unable to recover it. 00:39:27.286 [2024-07-22 10:55:32.927701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.286 [2024-07-22 10:55:32.927712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.286 qpair failed and we were unable to recover it. 00:39:27.286 [2024-07-22 10:55:32.928035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.286 [2024-07-22 10:55:32.928046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.286 qpair failed and we were unable to recover it. 00:39:27.286 [2024-07-22 10:55:32.928369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.286 [2024-07-22 10:55:32.928379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.286 qpair failed and we were unable to recover it. 00:39:27.286 [2024-07-22 10:55:32.928738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.286 [2024-07-22 10:55:32.928749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.286 qpair failed and we were unable to recover it. 00:39:27.286 [2024-07-22 10:55:32.929083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.286 [2024-07-22 10:55:32.929093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.286 qpair failed and we were unable to recover it. 00:39:27.286 [2024-07-22 10:55:32.929410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.286 [2024-07-22 10:55:32.929420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.286 qpair failed and we were unable to recover it. 
00:39:27.286 [2024-07-22 10:55:32.929754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.286 [2024-07-22 10:55:32.929765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.286 qpair failed and we were unable to recover it. 00:39:27.286 [2024-07-22 10:55:32.930058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.286 [2024-07-22 10:55:32.930069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.286 qpair failed and we were unable to recover it. 00:39:27.286 [2024-07-22 10:55:32.930415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.286 [2024-07-22 10:55:32.930427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.286 qpair failed and we were unable to recover it. 00:39:27.286 [2024-07-22 10:55:32.930757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.286 [2024-07-22 10:55:32.930767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.286 qpair failed and we were unable to recover it. 00:39:27.286 [2024-07-22 10:55:32.931088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.286 [2024-07-22 10:55:32.931099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.286 qpair failed and we were unable to recover it. 00:39:27.286 [2024-07-22 10:55:32.931445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.286 [2024-07-22 10:55:32.931457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.286 qpair failed and we were unable to recover it. 00:39:27.286 [2024-07-22 10:55:32.931771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.286 [2024-07-22 10:55:32.931782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.286 qpair failed and we were unable to recover it. 00:39:27.286 [2024-07-22 10:55:32.932128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.286 [2024-07-22 10:55:32.932138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.286 qpair failed and we were unable to recover it. 00:39:27.286 [2024-07-22 10:55:32.932462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.286 [2024-07-22 10:55:32.932472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.286 qpair failed and we were unable to recover it. 00:39:27.286 [2024-07-22 10:55:32.932816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.286 [2024-07-22 10:55:32.932826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.286 qpair failed and we were unable to recover it. 
00:39:27.286 [2024-07-22 10:55:32.933144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.286 [2024-07-22 10:55:32.933159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.286 qpair failed and we were unable to recover it. 00:39:27.286 [2024-07-22 10:55:32.933487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.286 [2024-07-22 10:55:32.933497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.287 qpair failed and we were unable to recover it. 00:39:27.287 [2024-07-22 10:55:32.933651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.287 [2024-07-22 10:55:32.933662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.287 qpair failed and we were unable to recover it. 00:39:27.287 [2024-07-22 10:55:32.933970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.287 [2024-07-22 10:55:32.933981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.287 qpair failed and we were unable to recover it. 00:39:27.287 [2024-07-22 10:55:32.934361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.287 [2024-07-22 10:55:32.934372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.287 qpair failed and we were unable to recover it. 00:39:27.287 [2024-07-22 10:55:32.934694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.287 [2024-07-22 10:55:32.934705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.287 qpair failed and we were unable to recover it. 00:39:27.287 [2024-07-22 10:55:32.935033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.287 [2024-07-22 10:55:32.935044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.287 qpair failed and we were unable to recover it. 00:39:27.287 [2024-07-22 10:55:32.935391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.287 [2024-07-22 10:55:32.935409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.287 qpair failed and we were unable to recover it. 00:39:27.287 [2024-07-22 10:55:32.935727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.287 [2024-07-22 10:55:32.935737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.287 qpair failed and we were unable to recover it. 00:39:27.287 [2024-07-22 10:55:32.936065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.287 [2024-07-22 10:55:32.936075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.287 qpair failed and we were unable to recover it. 
00:39:27.287 [2024-07-22 10:55:32.936393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.287 [2024-07-22 10:55:32.936407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.287 qpair failed and we were unable to recover it. 00:39:27.287 [2024-07-22 10:55:32.936735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.287 [2024-07-22 10:55:32.936746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.287 qpair failed and we were unable to recover it. 00:39:27.287 [2024-07-22 10:55:32.937102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.287 [2024-07-22 10:55:32.937113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.287 qpair failed and we were unable to recover it. 00:39:27.287 [2024-07-22 10:55:32.937476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.287 [2024-07-22 10:55:32.937486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.287 qpair failed and we were unable to recover it. 00:39:27.287 [2024-07-22 10:55:32.937806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.287 [2024-07-22 10:55:32.937818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.287 qpair failed and we were unable to recover it. 00:39:27.287 [2024-07-22 10:55:32.938160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.287 [2024-07-22 10:55:32.938171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.287 qpair failed and we were unable to recover it. 00:39:27.287 [2024-07-22 10:55:32.938510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.287 [2024-07-22 10:55:32.938522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.287 qpair failed and we were unable to recover it. 00:39:27.287 [2024-07-22 10:55:32.938863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.287 [2024-07-22 10:55:32.938874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.287 qpair failed and we were unable to recover it. 00:39:27.287 [2024-07-22 10:55:32.939203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.287 [2024-07-22 10:55:32.939214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.287 qpair failed and we were unable to recover it. 00:39:27.287 [2024-07-22 10:55:32.939556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.287 [2024-07-22 10:55:32.939566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.287 qpair failed and we were unable to recover it. 
00:39:27.287 [2024-07-22 10:55:32.939871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.287 [2024-07-22 10:55:32.939883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.287 qpair failed and we were unable to recover it. 00:39:27.287 [2024-07-22 10:55:32.940197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.287 [2024-07-22 10:55:32.940208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.287 qpair failed and we were unable to recover it. 00:39:27.287 [2024-07-22 10:55:32.940529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.287 [2024-07-22 10:55:32.940540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.287 qpair failed and we were unable to recover it. 00:39:27.287 [2024-07-22 10:55:32.940880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.287 [2024-07-22 10:55:32.940890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.287 qpair failed and we were unable to recover it. 00:39:27.287 [2024-07-22 10:55:32.941197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.287 [2024-07-22 10:55:32.941208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.287 qpair failed and we were unable to recover it. 00:39:27.287 [2024-07-22 10:55:32.941525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.287 [2024-07-22 10:55:32.941537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.287 qpair failed and we were unable to recover it. 00:39:27.287 [2024-07-22 10:55:32.941869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.287 [2024-07-22 10:55:32.941879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.287 qpair failed and we were unable to recover it. 00:39:27.287 [2024-07-22 10:55:32.942222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.287 [2024-07-22 10:55:32.942234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.287 qpair failed and we were unable to recover it. 00:39:27.287 [2024-07-22 10:55:32.942585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.287 [2024-07-22 10:55:32.942596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.287 qpair failed and we were unable to recover it. 00:39:27.287 [2024-07-22 10:55:32.942929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.287 [2024-07-22 10:55:32.942940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.287 qpair failed and we were unable to recover it. 
00:39:27.287 [2024-07-22 10:55:32.943254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.287 [2024-07-22 10:55:32.943265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.287 qpair failed and we were unable to recover it. 00:39:27.287 [2024-07-22 10:55:32.943586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.287 [2024-07-22 10:55:32.943598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.287 qpair failed and we were unable to recover it. 00:39:27.287 [2024-07-22 10:55:32.943946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.287 [2024-07-22 10:55:32.943956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.287 qpair failed and we were unable to recover it. 00:39:27.287 [2024-07-22 10:55:32.944272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.287 [2024-07-22 10:55:32.944284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.287 qpair failed and we were unable to recover it. 00:39:27.287 [2024-07-22 10:55:32.944631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.287 [2024-07-22 10:55:32.944642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.287 qpair failed and we were unable to recover it. 00:39:27.287 [2024-07-22 10:55:32.944947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.287 [2024-07-22 10:55:32.944957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.287 qpair failed and we were unable to recover it. 00:39:27.287 [2024-07-22 10:55:32.945244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.287 [2024-07-22 10:55:32.945255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.287 qpair failed and we were unable to recover it. 00:39:27.287 [2024-07-22 10:55:32.945573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.287 [2024-07-22 10:55:32.945584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.288 qpair failed and we were unable to recover it. 00:39:27.288 [2024-07-22 10:55:32.945925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.288 [2024-07-22 10:55:32.945936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.288 qpair failed and we were unable to recover it. 00:39:27.288 [2024-07-22 10:55:32.946283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.288 [2024-07-22 10:55:32.946294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.288 qpair failed and we were unable to recover it. 
00:39:27.288 [2024-07-22 10:55:32.946628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.288 [2024-07-22 10:55:32.946639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.288 qpair failed and we were unable to recover it. 00:39:27.288 [2024-07-22 10:55:32.946974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.288 [2024-07-22 10:55:32.946984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.288 qpair failed and we were unable to recover it. 00:39:27.288 [2024-07-22 10:55:32.947302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.288 [2024-07-22 10:55:32.947314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.288 qpair failed and we were unable to recover it. 00:39:27.288 [2024-07-22 10:55:32.947644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.288 [2024-07-22 10:55:32.947655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.288 qpair failed and we were unable to recover it. 00:39:27.288 [2024-07-22 10:55:32.947953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.288 [2024-07-22 10:55:32.947964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.288 qpair failed and we were unable to recover it. 00:39:27.288 [2024-07-22 10:55:32.948265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.288 [2024-07-22 10:55:32.948275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.288 qpair failed and we were unable to recover it. 00:39:27.288 [2024-07-22 10:55:32.948582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.288 [2024-07-22 10:55:32.948594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.288 qpair failed and we were unable to recover it. 00:39:27.288 [2024-07-22 10:55:32.948935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.288 [2024-07-22 10:55:32.948945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.288 qpair failed and we were unable to recover it. 00:39:27.288 [2024-07-22 10:55:32.949287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.288 [2024-07-22 10:55:32.949298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.288 qpair failed and we were unable to recover it. 00:39:27.288 [2024-07-22 10:55:32.949638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.288 [2024-07-22 10:55:32.949649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.288 qpair failed and we were unable to recover it. 
00:39:27.288 [2024-07-22 10:55:32.949862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.288 [2024-07-22 10:55:32.949872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.288 qpair failed and we were unable to recover it. 00:39:27.288 [2024-07-22 10:55:32.950200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.288 [2024-07-22 10:55:32.950210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.288 qpair failed and we were unable to recover it. 00:39:27.288 [2024-07-22 10:55:32.950562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.288 [2024-07-22 10:55:32.950574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.288 qpair failed and we were unable to recover it. 00:39:27.288 [2024-07-22 10:55:32.950907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.288 [2024-07-22 10:55:32.950918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.288 qpair failed and we were unable to recover it. 00:39:27.288 [2024-07-22 10:55:32.951236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.288 [2024-07-22 10:55:32.951250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.288 qpair failed and we were unable to recover it. 00:39:27.559 [2024-07-22 10:55:32.951571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.559 [2024-07-22 10:55:32.951583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.559 qpair failed and we were unable to recover it. 00:39:27.559 [2024-07-22 10:55:32.951937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.559 [2024-07-22 10:55:32.951949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.559 qpair failed and we were unable to recover it. 00:39:27.559 [2024-07-22 10:55:32.952163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.559 [2024-07-22 10:55:32.952173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.559 qpair failed and we were unable to recover it. 00:39:27.559 [2024-07-22 10:55:32.952507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.559 [2024-07-22 10:55:32.952517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.559 qpair failed and we were unable to recover it. 00:39:27.559 [2024-07-22 10:55:32.952870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.559 [2024-07-22 10:55:32.952881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.559 qpair failed and we were unable to recover it. 
00:39:27.559 [2024-07-22 10:55:32.953221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.559 [2024-07-22 10:55:32.953231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.559 qpair failed and we were unable to recover it. 00:39:27.559 [2024-07-22 10:55:32.953551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.559 [2024-07-22 10:55:32.953562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.559 qpair failed and we were unable to recover it. 00:39:27.559 [2024-07-22 10:55:32.953890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.559 [2024-07-22 10:55:32.953900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.559 qpair failed and we were unable to recover it. 00:39:27.559 [2024-07-22 10:55:32.954244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.559 [2024-07-22 10:55:32.954255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.559 qpair failed and we were unable to recover it. 00:39:27.559 [2024-07-22 10:55:32.954557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.559 [2024-07-22 10:55:32.954567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.559 qpair failed and we were unable to recover it. 00:39:27.559 [2024-07-22 10:55:32.954932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.559 [2024-07-22 10:55:32.954943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.559 qpair failed and we were unable to recover it. 00:39:27.559 [2024-07-22 10:55:32.955254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.559 [2024-07-22 10:55:32.955264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.559 qpair failed and we were unable to recover it. 00:39:27.559 [2024-07-22 10:55:32.955580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.559 [2024-07-22 10:55:32.955591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.559 qpair failed and we were unable to recover it. 00:39:27.559 [2024-07-22 10:55:32.955916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.559 [2024-07-22 10:55:32.955927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.559 qpair failed and we were unable to recover it. 00:39:27.559 [2024-07-22 10:55:32.956296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.559 [2024-07-22 10:55:32.956307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.559 qpair failed and we were unable to recover it. 
00:39:27.559 [2024-07-22 10:55:32.956646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.559 [2024-07-22 10:55:32.956658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.559 qpair failed and we were unable to recover it. 00:39:27.559 [2024-07-22 10:55:32.957002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.559 [2024-07-22 10:55:32.957013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.559 qpair failed and we were unable to recover it. 00:39:27.559 [2024-07-22 10:55:32.957355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.559 [2024-07-22 10:55:32.957367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.559 qpair failed and we were unable to recover it. 00:39:27.559 [2024-07-22 10:55:32.957679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.559 [2024-07-22 10:55:32.957690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.559 qpair failed and we were unable to recover it. 00:39:27.559 [2024-07-22 10:55:32.958010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.559 [2024-07-22 10:55:32.958021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.560 qpair failed and we were unable to recover it. 00:39:27.560 [2024-07-22 10:55:32.958370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.560 [2024-07-22 10:55:32.958382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.560 qpair failed and we were unable to recover it. 00:39:27.560 [2024-07-22 10:55:32.958709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.560 [2024-07-22 10:55:32.958722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.560 qpair failed and we were unable to recover it. 00:39:27.560 [2024-07-22 10:55:32.959041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.560 [2024-07-22 10:55:32.959053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.560 qpair failed and we were unable to recover it. 00:39:27.560 [2024-07-22 10:55:32.959374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.560 [2024-07-22 10:55:32.959386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.560 qpair failed and we were unable to recover it. 00:39:27.560 [2024-07-22 10:55:32.959709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.560 [2024-07-22 10:55:32.959720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.560 qpair failed and we were unable to recover it. 
00:39:27.560 [2024-07-22 10:55:32.960062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.560 [2024-07-22 10:55:32.960073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.560 qpair failed and we were unable to recover it. 00:39:27.560 [2024-07-22 10:55:32.960386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.560 [2024-07-22 10:55:32.960403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.560 qpair failed and we were unable to recover it. 00:39:27.560 [2024-07-22 10:55:32.960730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.560 [2024-07-22 10:55:32.960740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.560 qpair failed and we were unable to recover it. 00:39:27.560 [2024-07-22 10:55:32.961083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.560 [2024-07-22 10:55:32.961094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.560 qpair failed and we were unable to recover it. 00:39:27.560 [2024-07-22 10:55:32.961438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.560 [2024-07-22 10:55:32.961449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.560 qpair failed and we were unable to recover it. 00:39:27.560 [2024-07-22 10:55:32.961840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.560 [2024-07-22 10:55:32.961851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.560 qpair failed and we were unable to recover it. 00:39:27.560 [2024-07-22 10:55:32.962170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.560 [2024-07-22 10:55:32.962182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.560 qpair failed and we were unable to recover it. 00:39:27.560 [2024-07-22 10:55:32.962528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.560 [2024-07-22 10:55:32.962538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.560 qpair failed and we were unable to recover it. 00:39:27.560 [2024-07-22 10:55:32.962880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.560 [2024-07-22 10:55:32.962891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.560 qpair failed and we were unable to recover it. 00:39:27.560 [2024-07-22 10:55:32.963210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.560 [2024-07-22 10:55:32.963221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.560 qpair failed and we were unable to recover it. 
00:39:27.560 [2024-07-22 10:55:32.963540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.560 [2024-07-22 10:55:32.963551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.560 qpair failed and we were unable to recover it. 00:39:27.560 [2024-07-22 10:55:32.963893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.560 [2024-07-22 10:55:32.963904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.560 qpair failed and we were unable to recover it. 00:39:27.560 [2024-07-22 10:55:32.964253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.560 [2024-07-22 10:55:32.964264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.560 qpair failed and we were unable to recover it. 00:39:27.560 [2024-07-22 10:55:32.964582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.560 [2024-07-22 10:55:32.964594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.560 qpair failed and we were unable to recover it. 00:39:27.560 [2024-07-22 10:55:32.964913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.560 [2024-07-22 10:55:32.964924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.560 qpair failed and we were unable to recover it. 00:39:27.560 [2024-07-22 10:55:32.965243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.560 [2024-07-22 10:55:32.965255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.560 qpair failed and we were unable to recover it. 00:39:27.560 [2024-07-22 10:55:32.965565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.560 [2024-07-22 10:55:32.965576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.560 qpair failed and we were unable to recover it. 00:39:27.560 [2024-07-22 10:55:32.965762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.560 [2024-07-22 10:55:32.965773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.560 qpair failed and we were unable to recover it. 00:39:27.560 [2024-07-22 10:55:32.966102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.560 [2024-07-22 10:55:32.966112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.560 qpair failed and we were unable to recover it. 00:39:27.560 [2024-07-22 10:55:32.966404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.560 [2024-07-22 10:55:32.966415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.560 qpair failed and we were unable to recover it. 
00:39:27.560 [2024-07-22 10:55:32.966733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.560 [2024-07-22 10:55:32.966743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.560 qpair failed and we were unable to recover it. 00:39:27.560 [2024-07-22 10:55:32.967062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.560 [2024-07-22 10:55:32.967073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.560 qpair failed and we were unable to recover it. 00:39:27.560 [2024-07-22 10:55:32.967341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.560 [2024-07-22 10:55:32.967351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.560 qpair failed and we were unable to recover it. 00:39:27.560 [2024-07-22 10:55:32.967697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.560 [2024-07-22 10:55:32.967708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.560 qpair failed and we were unable to recover it. 00:39:27.560 [2024-07-22 10:55:32.967923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.560 [2024-07-22 10:55:32.967933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.560 qpair failed and we were unable to recover it. 00:39:27.560 [2024-07-22 10:55:32.968263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.560 [2024-07-22 10:55:32.968273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.560 qpair failed and we were unable to recover it. 00:39:27.560 [2024-07-22 10:55:32.968599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.560 [2024-07-22 10:55:32.968611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.560 qpair failed and we were unable to recover it. 00:39:27.560 [2024-07-22 10:55:32.968987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.560 [2024-07-22 10:55:32.968998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.560 qpair failed and we were unable to recover it. 00:39:27.560 [2024-07-22 10:55:32.969307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.560 [2024-07-22 10:55:32.969319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.560 qpair failed and we were unable to recover it. 00:39:27.560 [2024-07-22 10:55:32.969649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.560 [2024-07-22 10:55:32.969660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.560 qpair failed and we were unable to recover it. 
00:39:27.560 [2024-07-22 10:55:32.969994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.560 [2024-07-22 10:55:32.970006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.560 qpair failed and we were unable to recover it. 00:39:27.560 [2024-07-22 10:55:32.970345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.560 [2024-07-22 10:55:32.970355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.560 qpair failed and we were unable to recover it. 00:39:27.560 [2024-07-22 10:55:32.970702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.560 [2024-07-22 10:55:32.970713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.560 qpair failed and we were unable to recover it. 00:39:27.560 [2024-07-22 10:55:32.970998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.560 [2024-07-22 10:55:32.971009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.560 qpair failed and we were unable to recover it. 00:39:27.560 [2024-07-22 10:55:32.971327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.561 [2024-07-22 10:55:32.971339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.561 qpair failed and we were unable to recover it. 00:39:27.561 [2024-07-22 10:55:32.971569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.561 [2024-07-22 10:55:32.971581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.561 qpair failed and we were unable to recover it. 00:39:27.561 [2024-07-22 10:55:32.971929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.561 [2024-07-22 10:55:32.971941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.561 qpair failed and we were unable to recover it. 00:39:27.561 [2024-07-22 10:55:32.972170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.561 [2024-07-22 10:55:32.972181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.561 qpair failed and we were unable to recover it. 00:39:27.561 [2024-07-22 10:55:32.972494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.561 [2024-07-22 10:55:32.972505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.561 qpair failed and we were unable to recover it. 00:39:27.561 [2024-07-22 10:55:32.972799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.561 [2024-07-22 10:55:32.972810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.561 qpair failed and we were unable to recover it. 
00:39:27.561 [2024-07-22 10:55:32.973155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.561 [2024-07-22 10:55:32.973166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.561 qpair failed and we were unable to recover it. 00:39:27.561 [2024-07-22 10:55:32.973475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.561 [2024-07-22 10:55:32.973488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.561 qpair failed and we were unable to recover it. 00:39:27.561 [2024-07-22 10:55:32.973806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.561 [2024-07-22 10:55:32.973819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.561 qpair failed and we were unable to recover it. 00:39:27.561 [2024-07-22 10:55:32.974011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.561 [2024-07-22 10:55:32.974022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.561 qpair failed and we were unable to recover it. 00:39:27.561 [2024-07-22 10:55:32.974338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.561 [2024-07-22 10:55:32.974349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.561 qpair failed and we were unable to recover it. 00:39:27.561 [2024-07-22 10:55:32.974671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.561 [2024-07-22 10:55:32.974683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.561 qpair failed and we were unable to recover it. 00:39:27.561 [2024-07-22 10:55:32.974982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.561 [2024-07-22 10:55:32.974992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.561 qpair failed and we were unable to recover it. 00:39:27.561 [2024-07-22 10:55:32.975322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.561 [2024-07-22 10:55:32.975333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.561 qpair failed and we were unable to recover it. 00:39:27.561 [2024-07-22 10:55:32.975562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.561 [2024-07-22 10:55:32.975572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.561 qpair failed and we were unable to recover it. 00:39:27.561 [2024-07-22 10:55:32.975864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.561 [2024-07-22 10:55:32.975876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.561 qpair failed and we were unable to recover it. 
00:39:27.561 [2024-07-22 10:55:32.976199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.561 [2024-07-22 10:55:32.976210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.561 qpair failed and we were unable to recover it. 00:39:27.561 [2024-07-22 10:55:32.976595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.561 [2024-07-22 10:55:32.976606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.561 qpair failed and we were unable to recover it. 00:39:27.561 [2024-07-22 10:55:32.976902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.561 [2024-07-22 10:55:32.976912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.561 qpair failed and we were unable to recover it. 00:39:27.561 [2024-07-22 10:55:32.977255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.561 [2024-07-22 10:55:32.977266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.561 qpair failed and we were unable to recover it. 00:39:27.561 [2024-07-22 10:55:32.977487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.561 [2024-07-22 10:55:32.977497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.561 qpair failed and we were unable to recover it. 00:39:27.561 [2024-07-22 10:55:32.977718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.561 [2024-07-22 10:55:32.977730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.561 qpair failed and we were unable to recover it. 00:39:27.561 [2024-07-22 10:55:32.977850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.561 [2024-07-22 10:55:32.977861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.561 qpair failed and we were unable to recover it. 00:39:27.561 [2024-07-22 10:55:32.978172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.561 [2024-07-22 10:55:32.978182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.561 qpair failed and we were unable to recover it. 00:39:27.561 [2024-07-22 10:55:32.978498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.561 [2024-07-22 10:55:32.978510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.561 qpair failed and we were unable to recover it. 00:39:27.561 [2024-07-22 10:55:32.978818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.561 [2024-07-22 10:55:32.978828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.561 qpair failed and we were unable to recover it. 
00:39:27.561 [2024-07-22 10:55:32.979174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.561 [2024-07-22 10:55:32.979185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.561 qpair failed and we were unable to recover it. 00:39:27.561 [2024-07-22 10:55:32.979588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.561 [2024-07-22 10:55:32.979599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.561 qpair failed and we were unable to recover it. 00:39:27.561 [2024-07-22 10:55:32.979911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.561 [2024-07-22 10:55:32.979922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.561 qpair failed and we were unable to recover it. 00:39:27.561 [2024-07-22 10:55:32.980200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.561 [2024-07-22 10:55:32.980211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.561 qpair failed and we were unable to recover it. 00:39:27.561 [2024-07-22 10:55:32.980563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.561 [2024-07-22 10:55:32.980574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.561 qpair failed and we were unable to recover it. 00:39:27.561 [2024-07-22 10:55:32.980890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.561 [2024-07-22 10:55:32.980902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.561 qpair failed and we were unable to recover it. 00:39:27.561 [2024-07-22 10:55:32.981237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.561 [2024-07-22 10:55:32.981247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.561 qpair failed and we were unable to recover it. 00:39:27.561 [2024-07-22 10:55:32.981482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.561 [2024-07-22 10:55:32.981494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.561 qpair failed and we were unable to recover it. 00:39:27.561 [2024-07-22 10:55:32.981771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.561 [2024-07-22 10:55:32.981782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.561 qpair failed and we were unable to recover it. 00:39:27.561 [2024-07-22 10:55:32.982103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.561 [2024-07-22 10:55:32.982117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.561 qpair failed and we were unable to recover it. 
00:39:27.561 [2024-07-22 10:55:32.982330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.561 [2024-07-22 10:55:32.982342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.561 qpair failed and we were unable to recover it. 00:39:27.561 [2024-07-22 10:55:32.982646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.561 [2024-07-22 10:55:32.982658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.561 qpair failed and we were unable to recover it. 00:39:27.561 [2024-07-22 10:55:32.983006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.561 [2024-07-22 10:55:32.983017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.561 qpair failed and we were unable to recover it. 00:39:27.561 [2024-07-22 10:55:32.983337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.561 [2024-07-22 10:55:32.983348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.561 qpair failed and we were unable to recover it. 00:39:27.561 [2024-07-22 10:55:32.983753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.561 [2024-07-22 10:55:32.983765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.562 qpair failed and we were unable to recover it. 00:39:27.562 [2024-07-22 10:55:32.984065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.562 [2024-07-22 10:55:32.984077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.562 qpair failed and we were unable to recover it. 00:39:27.562 [2024-07-22 10:55:32.984408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.562 [2024-07-22 10:55:32.984419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.562 qpair failed and we were unable to recover it. 00:39:27.562 [2024-07-22 10:55:32.984740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.562 [2024-07-22 10:55:32.984751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.562 qpair failed and we were unable to recover it. 00:39:27.562 [2024-07-22 10:55:32.985075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.562 [2024-07-22 10:55:32.985086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.562 qpair failed and we were unable to recover it. 00:39:27.562 [2024-07-22 10:55:32.985429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.562 [2024-07-22 10:55:32.985440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.562 qpair failed and we were unable to recover it. 
00:39:27.562 [2024-07-22 10:55:32.985786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.562 [2024-07-22 10:55:32.985797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.562 qpair failed and we were unable to recover it. 00:39:27.562 [2024-07-22 10:55:32.986116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.562 [2024-07-22 10:55:32.986127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.562 qpair failed and we were unable to recover it. 00:39:27.562 [2024-07-22 10:55:32.986470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.562 [2024-07-22 10:55:32.986482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.562 qpair failed and we were unable to recover it. 00:39:27.562 [2024-07-22 10:55:32.986807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.562 [2024-07-22 10:55:32.986819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.562 qpair failed and we were unable to recover it. 00:39:27.562 [2024-07-22 10:55:32.987165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.562 [2024-07-22 10:55:32.987177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.562 qpair failed and we were unable to recover it. 00:39:27.562 [2024-07-22 10:55:32.987524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.562 [2024-07-22 10:55:32.987535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.562 qpair failed and we were unable to recover it. 00:39:27.562 [2024-07-22 10:55:32.987851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.562 [2024-07-22 10:55:32.987862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.562 qpair failed and we were unable to recover it. 00:39:27.562 [2024-07-22 10:55:32.988158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.562 [2024-07-22 10:55:32.988170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.562 qpair failed and we were unable to recover it. 00:39:27.562 [2024-07-22 10:55:32.988507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.562 [2024-07-22 10:55:32.988518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.562 qpair failed and we were unable to recover it. 00:39:27.562 [2024-07-22 10:55:32.988859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.562 [2024-07-22 10:55:32.988869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.562 qpair failed and we were unable to recover it. 
00:39:27.562 [2024-07-22 10:55:32.989190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.562 [2024-07-22 10:55:32.989200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.562 qpair failed and we were unable to recover it. 00:39:27.562 [2024-07-22 10:55:32.989544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.562 [2024-07-22 10:55:32.989555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.562 qpair failed and we were unable to recover it. 00:39:27.562 [2024-07-22 10:55:32.989867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.562 [2024-07-22 10:55:32.989878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.562 qpair failed and we were unable to recover it. 00:39:27.562 [2024-07-22 10:55:32.990280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.562 [2024-07-22 10:55:32.990290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.562 qpair failed and we were unable to recover it. 00:39:27.562 [2024-07-22 10:55:32.990599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.562 [2024-07-22 10:55:32.990610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.562 qpair failed and we were unable to recover it. 00:39:27.562 [2024-07-22 10:55:32.990951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.562 [2024-07-22 10:55:32.990961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.562 qpair failed and we were unable to recover it. 00:39:27.562 [2024-07-22 10:55:32.991308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.562 [2024-07-22 10:55:32.991319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.562 qpair failed and we were unable to recover it. 00:39:27.562 [2024-07-22 10:55:32.991658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.562 [2024-07-22 10:55:32.991668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.562 qpair failed and we were unable to recover it. 00:39:27.562 [2024-07-22 10:55:32.992007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.562 [2024-07-22 10:55:32.992018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.562 qpair failed and we were unable to recover it. 00:39:27.562 [2024-07-22 10:55:32.992364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.562 [2024-07-22 10:55:32.992376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.562 qpair failed and we were unable to recover it. 
00:39:27.562 [2024-07-22 10:55:32.992706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.562 [2024-07-22 10:55:32.992717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.562 qpair failed and we were unable to recover it. 00:39:27.562 [2024-07-22 10:55:32.993041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.562 [2024-07-22 10:55:32.993051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.562 qpair failed and we were unable to recover it. 00:39:27.562 [2024-07-22 10:55:32.993371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.562 [2024-07-22 10:55:32.993381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.562 qpair failed and we were unable to recover it. 00:39:27.562 [2024-07-22 10:55:32.993624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.562 [2024-07-22 10:55:32.993635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.562 qpair failed and we were unable to recover it. 00:39:27.562 [2024-07-22 10:55:32.993941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.562 [2024-07-22 10:55:32.993951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.562 qpair failed and we were unable to recover it. 00:39:27.562 [2024-07-22 10:55:32.994273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.562 [2024-07-22 10:55:32.994285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.562 qpair failed and we were unable to recover it. 00:39:27.562 [2024-07-22 10:55:32.994484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.562 [2024-07-22 10:55:32.994496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.562 qpair failed and we were unable to recover it. 00:39:27.562 [2024-07-22 10:55:32.994817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.562 [2024-07-22 10:55:32.994827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.562 qpair failed and we were unable to recover it. 00:39:27.562 [2024-07-22 10:55:32.995101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.562 [2024-07-22 10:55:32.995111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.562 qpair failed and we were unable to recover it. 00:39:27.562 [2024-07-22 10:55:32.995433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.562 [2024-07-22 10:55:32.995444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.562 qpair failed and we were unable to recover it. 
00:39:27.562 [2024-07-22 10:55:32.995771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.562 [2024-07-22 10:55:32.995782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:27.562 qpair failed and we were unable to recover it.
00:39:27.562 [... the identical three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 10:55:32.996124 through 10:55:33.063789 ...]
00:39:27.568 [2024-07-22 10:55:33.064081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.568 [2024-07-22 10:55:33.064092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:27.568 qpair failed and we were unable to recover it.
00:39:27.568 [2024-07-22 10:55:33.064473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.568 [2024-07-22 10:55:33.064484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.568 qpair failed and we were unable to recover it. 00:39:27.568 [2024-07-22 10:55:33.064822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.568 [2024-07-22 10:55:33.064832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.568 qpair failed and we were unable to recover it. 00:39:27.568 [2024-07-22 10:55:33.065153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.568 [2024-07-22 10:55:33.065163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.568 qpair failed and we were unable to recover it. 00:39:27.568 [2024-07-22 10:55:33.065506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.568 [2024-07-22 10:55:33.065517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.568 qpair failed and we were unable to recover it. 00:39:27.568 [2024-07-22 10:55:33.065835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.568 [2024-07-22 10:55:33.065845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.568 qpair failed and we were unable to recover it. 00:39:27.568 [2024-07-22 10:55:33.066166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.568 [2024-07-22 10:55:33.066176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.568 qpair failed and we were unable to recover it. 00:39:27.568 [2024-07-22 10:55:33.066476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.568 [2024-07-22 10:55:33.066488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.568 qpair failed and we were unable to recover it. 00:39:27.568 [2024-07-22 10:55:33.066796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.568 [2024-07-22 10:55:33.066806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.568 qpair failed and we were unable to recover it. 00:39:27.568 [2024-07-22 10:55:33.067135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.568 [2024-07-22 10:55:33.067146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.568 qpair failed and we were unable to recover it. 00:39:27.568 [2024-07-22 10:55:33.067475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.568 [2024-07-22 10:55:33.067486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.568 qpair failed and we were unable to recover it. 
00:39:27.568 [2024-07-22 10:55:33.067829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.568 [2024-07-22 10:55:33.067839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.568 qpair failed and we were unable to recover it. 00:39:27.568 [2024-07-22 10:55:33.068190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.568 [2024-07-22 10:55:33.068201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.568 qpair failed and we were unable to recover it. 00:39:27.568 [2024-07-22 10:55:33.068547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.568 [2024-07-22 10:55:33.068558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.568 qpair failed and we were unable to recover it. 00:39:27.568 [2024-07-22 10:55:33.068889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.568 [2024-07-22 10:55:33.068899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.568 qpair failed and we were unable to recover it. 00:39:27.568 [2024-07-22 10:55:33.069257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.568 [2024-07-22 10:55:33.069268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.568 qpair failed and we were unable to recover it. 00:39:27.568 [2024-07-22 10:55:33.069587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.568 [2024-07-22 10:55:33.069598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.568 qpair failed and we were unable to recover it. 00:39:27.568 [2024-07-22 10:55:33.069940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.568 [2024-07-22 10:55:33.069951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.568 qpair failed and we were unable to recover it. 00:39:27.568 [2024-07-22 10:55:33.070143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.568 [2024-07-22 10:55:33.070154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.568 qpair failed and we were unable to recover it. 00:39:27.568 [2024-07-22 10:55:33.070467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.568 [2024-07-22 10:55:33.070478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.568 qpair failed and we were unable to recover it. 00:39:27.568 [2024-07-22 10:55:33.070787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.568 [2024-07-22 10:55:33.070797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.568 qpair failed and we were unable to recover it. 
00:39:27.568 [2024-07-22 10:55:33.071140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.568 [2024-07-22 10:55:33.071151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.568 qpair failed and we were unable to recover it. 00:39:27.568 [2024-07-22 10:55:33.071484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.568 [2024-07-22 10:55:33.071495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.568 qpair failed and we were unable to recover it. 00:39:27.568 [2024-07-22 10:55:33.071823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.568 [2024-07-22 10:55:33.071834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.568 qpair failed and we were unable to recover it. 00:39:27.568 [2024-07-22 10:55:33.072179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.568 [2024-07-22 10:55:33.072190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.568 qpair failed and we were unable to recover it. 00:39:27.568 [2024-07-22 10:55:33.072520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.568 [2024-07-22 10:55:33.072531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.568 qpair failed and we were unable to recover it. 00:39:27.568 [2024-07-22 10:55:33.072843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.568 [2024-07-22 10:55:33.072854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.568 qpair failed and we were unable to recover it. 00:39:27.568 [2024-07-22 10:55:33.073173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.568 [2024-07-22 10:55:33.073184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.568 qpair failed and we were unable to recover it. 00:39:27.568 [2024-07-22 10:55:33.073439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.568 [2024-07-22 10:55:33.073450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.568 qpair failed and we were unable to recover it. 00:39:27.568 [2024-07-22 10:55:33.073641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.568 [2024-07-22 10:55:33.073652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.569 qpair failed and we were unable to recover it. 00:39:27.569 [2024-07-22 10:55:33.073994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.569 [2024-07-22 10:55:33.074005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.569 qpair failed and we were unable to recover it. 
00:39:27.569 [2024-07-22 10:55:33.074323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.569 [2024-07-22 10:55:33.074333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.569 qpair failed and we were unable to recover it. 00:39:27.569 [2024-07-22 10:55:33.074653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.569 [2024-07-22 10:55:33.074664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.569 qpair failed and we were unable to recover it. 00:39:27.569 [2024-07-22 10:55:33.074969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.569 [2024-07-22 10:55:33.074980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.569 qpair failed and we were unable to recover it. 00:39:27.569 [2024-07-22 10:55:33.075308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.569 [2024-07-22 10:55:33.075318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.569 qpair failed and we were unable to recover it. 00:39:27.569 [2024-07-22 10:55:33.075685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.569 [2024-07-22 10:55:33.075697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.569 qpair failed and we were unable to recover it. 00:39:27.569 [2024-07-22 10:55:33.075991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.569 [2024-07-22 10:55:33.076002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.569 qpair failed and we were unable to recover it. 00:39:27.569 [2024-07-22 10:55:33.076341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.569 [2024-07-22 10:55:33.076352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.569 qpair failed and we were unable to recover it. 00:39:27.569 [2024-07-22 10:55:33.076680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.569 [2024-07-22 10:55:33.076690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.569 qpair failed and we were unable to recover it. 00:39:27.569 [2024-07-22 10:55:33.077016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.569 [2024-07-22 10:55:33.077026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.569 qpair failed and we were unable to recover it. 00:39:27.569 [2024-07-22 10:55:33.077369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.569 [2024-07-22 10:55:33.077379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.569 qpair failed and we were unable to recover it. 
00:39:27.569 [2024-07-22 10:55:33.077696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.569 [2024-07-22 10:55:33.077707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.569 qpair failed and we were unable to recover it. 00:39:27.569 [2024-07-22 10:55:33.078029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.569 [2024-07-22 10:55:33.078040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.569 qpair failed and we were unable to recover it. 00:39:27.569 [2024-07-22 10:55:33.078360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.569 [2024-07-22 10:55:33.078370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.569 qpair failed and we were unable to recover it. 00:39:27.569 [2024-07-22 10:55:33.078683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.569 [2024-07-22 10:55:33.078694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.569 qpair failed and we were unable to recover it. 00:39:27.569 [2024-07-22 10:55:33.079070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.569 [2024-07-22 10:55:33.079081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.569 qpair failed and we were unable to recover it. 00:39:27.569 [2024-07-22 10:55:33.079393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.569 [2024-07-22 10:55:33.079408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.569 qpair failed and we were unable to recover it. 00:39:27.569 [2024-07-22 10:55:33.079730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.569 [2024-07-22 10:55:33.079741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.569 qpair failed and we were unable to recover it. 00:39:27.569 [2024-07-22 10:55:33.080044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.569 [2024-07-22 10:55:33.080054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.569 qpair failed and we were unable to recover it. 00:39:27.569 [2024-07-22 10:55:33.080403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.569 [2024-07-22 10:55:33.080414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.569 qpair failed and we were unable to recover it. 00:39:27.569 [2024-07-22 10:55:33.080743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.569 [2024-07-22 10:55:33.080753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.569 qpair failed and we were unable to recover it. 
00:39:27.569 [2024-07-22 10:55:33.081075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.569 [2024-07-22 10:55:33.081086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.569 qpair failed and we were unable to recover it. 00:39:27.569 [2024-07-22 10:55:33.081427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.569 [2024-07-22 10:55:33.081438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.569 qpair failed and we were unable to recover it. 00:39:27.569 [2024-07-22 10:55:33.081795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.569 [2024-07-22 10:55:33.081805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.569 qpair failed and we were unable to recover it. 00:39:27.569 [2024-07-22 10:55:33.082146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.569 [2024-07-22 10:55:33.082156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.569 qpair failed and we were unable to recover it. 00:39:27.569 [2024-07-22 10:55:33.082473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.569 [2024-07-22 10:55:33.082484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.569 qpair failed and we were unable to recover it. 00:39:27.569 [2024-07-22 10:55:33.082781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.569 [2024-07-22 10:55:33.082791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.569 qpair failed and we were unable to recover it. 00:39:27.569 [2024-07-22 10:55:33.083010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.569 [2024-07-22 10:55:33.083020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.569 qpair failed and we were unable to recover it. 00:39:27.569 [2024-07-22 10:55:33.083340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.569 [2024-07-22 10:55:33.083350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.569 qpair failed and we were unable to recover it. 00:39:27.569 [2024-07-22 10:55:33.083652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.569 [2024-07-22 10:55:33.083663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.569 qpair failed and we were unable to recover it. 00:39:27.569 [2024-07-22 10:55:33.083994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.569 [2024-07-22 10:55:33.084005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.569 qpair failed and we were unable to recover it. 
00:39:27.569 [2024-07-22 10:55:33.084350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.569 [2024-07-22 10:55:33.084360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.569 qpair failed and we were unable to recover it. 00:39:27.569 [2024-07-22 10:55:33.084594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.569 [2024-07-22 10:55:33.084607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.569 qpair failed and we were unable to recover it. 00:39:27.569 [2024-07-22 10:55:33.084926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.569 [2024-07-22 10:55:33.084937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.569 qpair failed and we were unable to recover it. 00:39:27.569 [2024-07-22 10:55:33.085284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.569 [2024-07-22 10:55:33.085294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.569 qpair failed and we were unable to recover it. 00:39:27.569 [2024-07-22 10:55:33.085620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.569 [2024-07-22 10:55:33.085631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.569 qpair failed and we were unable to recover it. 00:39:27.569 [2024-07-22 10:55:33.085955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.569 [2024-07-22 10:55:33.085966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.569 qpair failed and we were unable to recover it. 00:39:27.569 [2024-07-22 10:55:33.086298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.569 [2024-07-22 10:55:33.086309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.569 qpair failed and we were unable to recover it. 00:39:27.569 [2024-07-22 10:55:33.086630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.569 [2024-07-22 10:55:33.086640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.569 qpair failed and we were unable to recover it. 00:39:27.569 [2024-07-22 10:55:33.086982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.569 [2024-07-22 10:55:33.086992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.569 qpair failed and we were unable to recover it. 00:39:27.569 [2024-07-22 10:55:33.087186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.570 [2024-07-22 10:55:33.087197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.570 qpair failed and we were unable to recover it. 
00:39:27.570 [2024-07-22 10:55:33.087613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.570 [2024-07-22 10:55:33.087623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.570 qpair failed and we were unable to recover it. 00:39:27.570 [2024-07-22 10:55:33.087933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.570 [2024-07-22 10:55:33.087943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.570 qpair failed and we were unable to recover it. 00:39:27.570 [2024-07-22 10:55:33.088289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.570 [2024-07-22 10:55:33.088299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.570 qpair failed and we were unable to recover it. 00:39:27.570 [2024-07-22 10:55:33.088585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.570 [2024-07-22 10:55:33.088597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.570 qpair failed and we were unable to recover it. 00:39:27.570 [2024-07-22 10:55:33.088917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.570 [2024-07-22 10:55:33.088928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.570 qpair failed and we were unable to recover it. 00:39:27.570 [2024-07-22 10:55:33.089270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.570 [2024-07-22 10:55:33.089280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.570 qpair failed and we were unable to recover it. 00:39:27.570 [2024-07-22 10:55:33.089597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.570 [2024-07-22 10:55:33.089608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.570 qpair failed and we were unable to recover it. 00:39:27.570 [2024-07-22 10:55:33.089928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.570 [2024-07-22 10:55:33.089938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.570 qpair failed and we were unable to recover it. 00:39:27.570 [2024-07-22 10:55:33.090258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.570 [2024-07-22 10:55:33.090269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.570 qpair failed and we were unable to recover it. 00:39:27.570 [2024-07-22 10:55:33.090613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.570 [2024-07-22 10:55:33.090623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.570 qpair failed and we were unable to recover it. 
00:39:27.570 [2024-07-22 10:55:33.090911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.570 [2024-07-22 10:55:33.090922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.570 qpair failed and we were unable to recover it. 00:39:27.570 [2024-07-22 10:55:33.091245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.570 [2024-07-22 10:55:33.091256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.570 qpair failed and we were unable to recover it. 00:39:27.570 [2024-07-22 10:55:33.091577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.570 [2024-07-22 10:55:33.091588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.570 qpair failed and we were unable to recover it. 00:39:27.570 [2024-07-22 10:55:33.091778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.570 [2024-07-22 10:55:33.091790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.570 qpair failed and we were unable to recover it. 00:39:27.570 [2024-07-22 10:55:33.092106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.570 [2024-07-22 10:55:33.092116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.570 qpair failed and we were unable to recover it. 00:39:27.570 [2024-07-22 10:55:33.092432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.570 [2024-07-22 10:55:33.092443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.570 qpair failed and we were unable to recover it. 00:39:27.570 [2024-07-22 10:55:33.092768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.570 [2024-07-22 10:55:33.092779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.570 qpair failed and we were unable to recover it. 00:39:27.570 [2024-07-22 10:55:33.093125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.570 [2024-07-22 10:55:33.093135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.570 qpair failed and we were unable to recover it. 00:39:27.570 [2024-07-22 10:55:33.093453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.570 [2024-07-22 10:55:33.093464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.570 qpair failed and we were unable to recover it. 00:39:27.570 [2024-07-22 10:55:33.093774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.570 [2024-07-22 10:55:33.093785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.570 qpair failed and we were unable to recover it. 
00:39:27.570 [2024-07-22 10:55:33.094105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.570 [2024-07-22 10:55:33.094116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.570 qpair failed and we were unable to recover it. 00:39:27.570 [2024-07-22 10:55:33.094467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.570 [2024-07-22 10:55:33.094477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.570 qpair failed and we were unable to recover it. 00:39:27.570 [2024-07-22 10:55:33.094831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.570 [2024-07-22 10:55:33.094842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.570 qpair failed and we were unable to recover it. 00:39:27.570 [2024-07-22 10:55:33.095162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.570 [2024-07-22 10:55:33.095174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.570 qpair failed and we were unable to recover it. 00:39:27.570 [2024-07-22 10:55:33.095494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.570 [2024-07-22 10:55:33.095505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.570 qpair failed and we were unable to recover it. 00:39:27.570 [2024-07-22 10:55:33.095848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.570 [2024-07-22 10:55:33.095858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.570 qpair failed and we were unable to recover it. 00:39:27.570 [2024-07-22 10:55:33.096204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.570 [2024-07-22 10:55:33.096215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.570 qpair failed and we were unable to recover it. 00:39:27.570 [2024-07-22 10:55:33.096538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.570 [2024-07-22 10:55:33.096548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.570 qpair failed and we were unable to recover it. 00:39:27.570 [2024-07-22 10:55:33.096900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.570 [2024-07-22 10:55:33.096911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.570 qpair failed and we were unable to recover it. 00:39:27.570 [2024-07-22 10:55:33.097256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.570 [2024-07-22 10:55:33.097267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.570 qpair failed and we were unable to recover it. 
00:39:27.570 [2024-07-22 10:55:33.097598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.570 [2024-07-22 10:55:33.097609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.570 qpair failed and we were unable to recover it. 00:39:27.570 [2024-07-22 10:55:33.097903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.570 [2024-07-22 10:55:33.097913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.570 qpair failed and we were unable to recover it. 00:39:27.570 [2024-07-22 10:55:33.098241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.570 [2024-07-22 10:55:33.098252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.570 qpair failed and we were unable to recover it. 00:39:27.570 [2024-07-22 10:55:33.098592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.570 [2024-07-22 10:55:33.098603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.570 qpair failed and we were unable to recover it. 00:39:27.570 [2024-07-22 10:55:33.098944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.570 [2024-07-22 10:55:33.098956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.570 qpair failed and we were unable to recover it. 00:39:27.570 [2024-07-22 10:55:33.099050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.570 [2024-07-22 10:55:33.099060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.570 qpair failed and we were unable to recover it. 00:39:27.570 [2024-07-22 10:55:33.099385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.570 [2024-07-22 10:55:33.099407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.570 qpair failed and we were unable to recover it. 00:39:27.570 [2024-07-22 10:55:33.099733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.570 [2024-07-22 10:55:33.099744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.570 qpair failed and we were unable to recover it. 00:39:27.570 [2024-07-22 10:55:33.100089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.570 [2024-07-22 10:55:33.100100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.570 qpair failed and we were unable to recover it. 00:39:27.570 [2024-07-22 10:55:33.100386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.570 [2024-07-22 10:55:33.100400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.570 qpair failed and we were unable to recover it. 
00:39:27.570 [2024-07-22 10:55:33.100703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.571 [2024-07-22 10:55:33.100713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.571 qpair failed and we were unable to recover it. 00:39:27.571 [2024-07-22 10:55:33.101090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.571 [2024-07-22 10:55:33.101101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.571 qpair failed and we were unable to recover it. 00:39:27.571 [2024-07-22 10:55:33.101450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.571 [2024-07-22 10:55:33.101461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.571 qpair failed and we were unable to recover it. 00:39:27.571 [2024-07-22 10:55:33.101779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.571 [2024-07-22 10:55:33.101790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.571 qpair failed and we were unable to recover it. 00:39:27.571 [2024-07-22 10:55:33.102113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.571 [2024-07-22 10:55:33.102124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.571 qpair failed and we were unable to recover it. 00:39:27.571 [2024-07-22 10:55:33.102467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.571 [2024-07-22 10:55:33.102477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.571 qpair failed and we were unable to recover it. 00:39:27.571 [2024-07-22 10:55:33.102828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.571 [2024-07-22 10:55:33.102839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.571 qpair failed and we were unable to recover it. 00:39:27.571 [2024-07-22 10:55:33.103159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.571 [2024-07-22 10:55:33.103169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.571 qpair failed and we were unable to recover it. 00:39:27.571 [2024-07-22 10:55:33.103541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.571 [2024-07-22 10:55:33.103551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.571 qpair failed and we were unable to recover it. 00:39:27.571 [2024-07-22 10:55:33.103855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.571 [2024-07-22 10:55:33.103865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.571 qpair failed and we were unable to recover it. 
00:39:27.571 [2024-07-22 10:55:33.104207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.571 [2024-07-22 10:55:33.104217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.571 qpair failed and we were unable to recover it. 00:39:27.571 [2024-07-22 10:55:33.104540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.571 [2024-07-22 10:55:33.104550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.571 qpair failed and we were unable to recover it. 00:39:27.571 [2024-07-22 10:55:33.104924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.571 [2024-07-22 10:55:33.104934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.571 qpair failed and we were unable to recover it. 00:39:27.571 [2024-07-22 10:55:33.105245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.571 [2024-07-22 10:55:33.105255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.571 qpair failed and we were unable to recover it. 00:39:27.571 [2024-07-22 10:55:33.105590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.571 [2024-07-22 10:55:33.105601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.571 qpair failed and we were unable to recover it. 00:39:27.571 [2024-07-22 10:55:33.105942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.571 [2024-07-22 10:55:33.105952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.571 qpair failed and we were unable to recover it. 00:39:27.571 [2024-07-22 10:55:33.106274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.571 [2024-07-22 10:55:33.106284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.571 qpair failed and we were unable to recover it. 00:39:27.571 [2024-07-22 10:55:33.106619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.571 [2024-07-22 10:55:33.106629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.571 qpair failed and we were unable to recover it. 00:39:27.571 [2024-07-22 10:55:33.106976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.571 [2024-07-22 10:55:33.106987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.571 qpair failed and we were unable to recover it. 00:39:27.571 [2024-07-22 10:55:33.107301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.571 [2024-07-22 10:55:33.107314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.571 qpair failed and we were unable to recover it. 
00:39:27.571 [2024-07-22 10:55:33.107630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.571 [2024-07-22 10:55:33.107642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.571 qpair failed and we were unable to recover it. 00:39:27.571 [2024-07-22 10:55:33.107949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.571 [2024-07-22 10:55:33.107960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.571 qpair failed and we were unable to recover it. 00:39:27.571 [2024-07-22 10:55:33.108303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.571 [2024-07-22 10:55:33.108313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.571 qpair failed and we were unable to recover it. 00:39:27.571 [2024-07-22 10:55:33.108662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.571 [2024-07-22 10:55:33.108673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.571 qpair failed and we were unable to recover it. 00:39:27.571 [2024-07-22 10:55:33.108994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.571 [2024-07-22 10:55:33.109004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.571 qpair failed and we were unable to recover it. 00:39:27.571 [2024-07-22 10:55:33.109349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.571 [2024-07-22 10:55:33.109359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.571 qpair failed and we were unable to recover it. 00:39:27.571 [2024-07-22 10:55:33.109685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.571 [2024-07-22 10:55:33.109696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.571 qpair failed and we were unable to recover it. 00:39:27.571 [2024-07-22 10:55:33.110015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.571 [2024-07-22 10:55:33.110025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.571 qpair failed and we were unable to recover it. 00:39:27.571 [2024-07-22 10:55:33.110349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.571 [2024-07-22 10:55:33.110360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.571 qpair failed and we were unable to recover it. 00:39:27.571 [2024-07-22 10:55:33.110705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.571 [2024-07-22 10:55:33.110717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.571 qpair failed and we were unable to recover it. 
00:39:27.576 [2024-07-22 10:55:33.177261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.576 [2024-07-22 10:55:33.177271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.576 qpair failed and we were unable to recover it. 00:39:27.576 [2024-07-22 10:55:33.177609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.576 [2024-07-22 10:55:33.177620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.576 qpair failed and we were unable to recover it. 00:39:27.576 [2024-07-22 10:55:33.177940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.576 [2024-07-22 10:55:33.177950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.576 qpair failed and we were unable to recover it. 00:39:27.576 [2024-07-22 10:55:33.178301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.576 [2024-07-22 10:55:33.178312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.576 qpair failed and we were unable to recover it. 00:39:27.576 [2024-07-22 10:55:33.178629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.576 [2024-07-22 10:55:33.178640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.577 qpair failed and we were unable to recover it. 00:39:27.577 [2024-07-22 10:55:33.178934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.577 [2024-07-22 10:55:33.178945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.577 qpair failed and we were unable to recover it. 00:39:27.577 [2024-07-22 10:55:33.179275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.577 [2024-07-22 10:55:33.179287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.577 qpair failed and we were unable to recover it. 00:39:27.577 [2024-07-22 10:55:33.179640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.577 [2024-07-22 10:55:33.179652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.577 qpair failed and we were unable to recover it. 00:39:27.577 [2024-07-22 10:55:33.179996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.577 [2024-07-22 10:55:33.180007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.577 qpair failed and we were unable to recover it. 00:39:27.577 [2024-07-22 10:55:33.180350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.577 [2024-07-22 10:55:33.180361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.577 qpair failed and we were unable to recover it. 
00:39:27.577 [2024-07-22 10:55:33.180583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.577 [2024-07-22 10:55:33.180594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.577 qpair failed and we were unable to recover it. 00:39:27.577 [2024-07-22 10:55:33.180896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.577 [2024-07-22 10:55:33.180906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.577 qpair failed and we were unable to recover it. 00:39:27.577 [2024-07-22 10:55:33.181251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.577 [2024-07-22 10:55:33.181261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.577 qpair failed and we were unable to recover it. 00:39:27.577 [2024-07-22 10:55:33.181605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.577 [2024-07-22 10:55:33.181616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.577 qpair failed and we were unable to recover it. 00:39:27.577 [2024-07-22 10:55:33.181806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.577 [2024-07-22 10:55:33.181817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.577 qpair failed and we were unable to recover it. 00:39:27.577 [2024-07-22 10:55:33.182109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.577 [2024-07-22 10:55:33.182120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.577 qpair failed and we were unable to recover it. 00:39:27.577 [2024-07-22 10:55:33.182471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.577 [2024-07-22 10:55:33.182482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.577 qpair failed and we were unable to recover it. 00:39:27.577 [2024-07-22 10:55:33.182803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.577 [2024-07-22 10:55:33.182814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.577 qpair failed and we were unable to recover it. 00:39:27.577 [2024-07-22 10:55:33.183135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.577 [2024-07-22 10:55:33.183146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.577 qpair failed and we were unable to recover it. 00:39:27.577 [2024-07-22 10:55:33.183486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.577 [2024-07-22 10:55:33.183497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.577 qpair failed and we were unable to recover it. 
00:39:27.577 [2024-07-22 10:55:33.183840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.577 [2024-07-22 10:55:33.183850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.577 qpair failed and we were unable to recover it. 00:39:27.577 [2024-07-22 10:55:33.184070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.577 [2024-07-22 10:55:33.184081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.577 qpair failed and we were unable to recover it. 00:39:27.577 [2024-07-22 10:55:33.184412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.577 [2024-07-22 10:55:33.184423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.577 qpair failed and we were unable to recover it. 00:39:27.577 [2024-07-22 10:55:33.184654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.577 [2024-07-22 10:55:33.184665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.577 qpair failed and we were unable to recover it. 00:39:27.577 [2024-07-22 10:55:33.184951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.577 [2024-07-22 10:55:33.184962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.577 qpair failed and we were unable to recover it. 00:39:27.577 [2024-07-22 10:55:33.185283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.577 [2024-07-22 10:55:33.185294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.577 qpair failed and we were unable to recover it. 00:39:27.577 [2024-07-22 10:55:33.185636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.577 [2024-07-22 10:55:33.185647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.577 qpair failed and we were unable to recover it. 00:39:27.577 [2024-07-22 10:55:33.185991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.577 [2024-07-22 10:55:33.186001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.577 qpair failed and we were unable to recover it. 00:39:27.577 [2024-07-22 10:55:33.186381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.577 [2024-07-22 10:55:33.186392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.577 qpair failed and we were unable to recover it. 00:39:27.577 [2024-07-22 10:55:33.186653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.577 [2024-07-22 10:55:33.186664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.577 qpair failed and we were unable to recover it. 
00:39:27.577 [2024-07-22 10:55:33.186966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.577 [2024-07-22 10:55:33.186976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.577 qpair failed and we were unable to recover it. 00:39:27.577 [2024-07-22 10:55:33.187328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.577 [2024-07-22 10:55:33.187338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.577 qpair failed and we were unable to recover it. 00:39:27.577 [2024-07-22 10:55:33.187661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.577 [2024-07-22 10:55:33.187672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.577 qpair failed and we were unable to recover it. 00:39:27.577 [2024-07-22 10:55:33.187992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.577 [2024-07-22 10:55:33.188006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.577 qpair failed and we were unable to recover it. 00:39:27.577 [2024-07-22 10:55:33.188327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.577 [2024-07-22 10:55:33.188338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.577 qpair failed and we were unable to recover it. 00:39:27.577 [2024-07-22 10:55:33.188646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.577 [2024-07-22 10:55:33.188656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.577 qpair failed and we were unable to recover it. 00:39:27.577 [2024-07-22 10:55:33.188871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.577 [2024-07-22 10:55:33.188882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.577 qpair failed and we were unable to recover it. 00:39:27.577 [2024-07-22 10:55:33.189202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.577 [2024-07-22 10:55:33.189212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.577 qpair failed and we were unable to recover it. 00:39:27.577 [2024-07-22 10:55:33.189507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.577 [2024-07-22 10:55:33.189518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.577 qpair failed and we were unable to recover it. 00:39:27.577 [2024-07-22 10:55:33.189845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.577 [2024-07-22 10:55:33.189855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.577 qpair failed and we were unable to recover it. 
00:39:27.577 [2024-07-22 10:55:33.190199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.577 [2024-07-22 10:55:33.190209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.577 qpair failed and we were unable to recover it. 00:39:27.577 [2024-07-22 10:55:33.190524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.577 [2024-07-22 10:55:33.190535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.577 qpair failed and we were unable to recover it. 00:39:27.577 [2024-07-22 10:55:33.190779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.577 [2024-07-22 10:55:33.190790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.577 qpair failed and we were unable to recover it. 00:39:27.577 [2024-07-22 10:55:33.191101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.577 [2024-07-22 10:55:33.191112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.577 qpair failed and we were unable to recover it. 00:39:27.577 [2024-07-22 10:55:33.191453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.577 [2024-07-22 10:55:33.191464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.577 qpair failed and we were unable to recover it. 00:39:27.577 [2024-07-22 10:55:33.191787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.577 [2024-07-22 10:55:33.191797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.578 qpair failed and we were unable to recover it. 00:39:27.578 [2024-07-22 10:55:33.192108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.578 [2024-07-22 10:55:33.192118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.578 qpair failed and we were unable to recover it. 00:39:27.578 [2024-07-22 10:55:33.192467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.578 [2024-07-22 10:55:33.192478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.578 qpair failed and we were unable to recover it. 00:39:27.578 [2024-07-22 10:55:33.192793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.578 [2024-07-22 10:55:33.192804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.578 qpair failed and we were unable to recover it. 00:39:27.578 [2024-07-22 10:55:33.193123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.578 [2024-07-22 10:55:33.193133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.578 qpair failed and we were unable to recover it. 
00:39:27.578 [2024-07-22 10:55:33.193460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.578 [2024-07-22 10:55:33.193471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.578 qpair failed and we were unable to recover it. 00:39:27.578 [2024-07-22 10:55:33.193817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.578 [2024-07-22 10:55:33.193828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.578 qpair failed and we were unable to recover it. 00:39:27.578 [2024-07-22 10:55:33.194041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.578 [2024-07-22 10:55:33.194052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.578 qpair failed and we were unable to recover it. 00:39:27.578 [2024-07-22 10:55:33.194431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.578 [2024-07-22 10:55:33.194442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.578 qpair failed and we were unable to recover it. 00:39:27.578 [2024-07-22 10:55:33.194723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.578 [2024-07-22 10:55:33.194733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.578 qpair failed and we were unable to recover it. 00:39:27.578 [2024-07-22 10:55:33.195077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.578 [2024-07-22 10:55:33.195087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.578 qpair failed and we were unable to recover it. 00:39:27.578 [2024-07-22 10:55:33.195387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.578 [2024-07-22 10:55:33.195401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.578 qpair failed and we were unable to recover it. 00:39:27.578 [2024-07-22 10:55:33.195726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.578 [2024-07-22 10:55:33.195737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.578 qpair failed and we were unable to recover it. 00:39:27.578 [2024-07-22 10:55:33.196058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.578 [2024-07-22 10:55:33.196069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.578 qpair failed and we were unable to recover it. 00:39:27.578 [2024-07-22 10:55:33.196411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.578 [2024-07-22 10:55:33.196421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.578 qpair failed and we were unable to recover it. 
00:39:27.578 [2024-07-22 10:55:33.196715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.578 [2024-07-22 10:55:33.196727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.578 qpair failed and we were unable to recover it. 00:39:27.578 [2024-07-22 10:55:33.197045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.578 [2024-07-22 10:55:33.197056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.578 qpair failed and we were unable to recover it. 00:39:27.578 [2024-07-22 10:55:33.197378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.578 [2024-07-22 10:55:33.197388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.578 qpair failed and we were unable to recover it. 00:39:27.578 [2024-07-22 10:55:33.197743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.578 [2024-07-22 10:55:33.197755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.578 qpair failed and we were unable to recover it. 00:39:27.578 [2024-07-22 10:55:33.198098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.578 [2024-07-22 10:55:33.198109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.578 qpair failed and we were unable to recover it. 00:39:27.578 [2024-07-22 10:55:33.198435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.578 [2024-07-22 10:55:33.198445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.578 qpair failed and we were unable to recover it. 00:39:27.578 [2024-07-22 10:55:33.198765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.578 [2024-07-22 10:55:33.198776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.578 qpair failed and we were unable to recover it. 00:39:27.578 [2024-07-22 10:55:33.199124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.578 [2024-07-22 10:55:33.199134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.578 qpair failed and we were unable to recover it. 00:39:27.578 [2024-07-22 10:55:33.199476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.578 [2024-07-22 10:55:33.199487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.578 qpair failed and we were unable to recover it. 00:39:27.578 [2024-07-22 10:55:33.199808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.578 [2024-07-22 10:55:33.199819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.578 qpair failed and we were unable to recover it. 
00:39:27.578 [2024-07-22 10:55:33.200132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.578 [2024-07-22 10:55:33.200143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.578 qpair failed and we were unable to recover it. 00:39:27.578 [2024-07-22 10:55:33.200485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.578 [2024-07-22 10:55:33.200495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.578 qpair failed and we were unable to recover it. 00:39:27.578 [2024-07-22 10:55:33.200788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.578 [2024-07-22 10:55:33.200799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.578 qpair failed and we were unable to recover it. 00:39:27.578 [2024-07-22 10:55:33.201033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.578 [2024-07-22 10:55:33.201044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.578 qpair failed and we were unable to recover it. 00:39:27.578 [2024-07-22 10:55:33.201369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.578 [2024-07-22 10:55:33.201379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.578 qpair failed and we were unable to recover it. 00:39:27.578 [2024-07-22 10:55:33.201715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.578 [2024-07-22 10:55:33.201726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.578 qpair failed and we were unable to recover it. 00:39:27.578 [2024-07-22 10:55:33.202076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.578 [2024-07-22 10:55:33.202087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.578 qpair failed and we were unable to recover it. 00:39:27.578 [2024-07-22 10:55:33.202436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.578 [2024-07-22 10:55:33.202447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.578 qpair failed and we were unable to recover it. 00:39:27.578 [2024-07-22 10:55:33.202782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.578 [2024-07-22 10:55:33.202792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.578 qpair failed and we were unable to recover it. 00:39:27.578 [2024-07-22 10:55:33.203139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.578 [2024-07-22 10:55:33.203150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.578 qpair failed and we were unable to recover it. 
00:39:27.578 [2024-07-22 10:55:33.203490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.578 [2024-07-22 10:55:33.203500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.578 qpair failed and we were unable to recover it. 00:39:27.578 [2024-07-22 10:55:33.203820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.578 [2024-07-22 10:55:33.203830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.578 qpair failed and we were unable to recover it. 00:39:27.578 [2024-07-22 10:55:33.204154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.578 [2024-07-22 10:55:33.204165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.578 qpair failed and we were unable to recover it. 00:39:27.578 [2024-07-22 10:55:33.204504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.578 [2024-07-22 10:55:33.204515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.578 qpair failed and we were unable to recover it. 00:39:27.578 [2024-07-22 10:55:33.204829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.578 [2024-07-22 10:55:33.204839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.578 qpair failed and we were unable to recover it. 00:39:27.578 [2024-07-22 10:55:33.205160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.578 [2024-07-22 10:55:33.205171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.578 qpair failed and we were unable to recover it. 00:39:27.578 [2024-07-22 10:55:33.205483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.579 [2024-07-22 10:55:33.205494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.579 qpair failed and we were unable to recover it. 00:39:27.579 [2024-07-22 10:55:33.205833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.579 [2024-07-22 10:55:33.205843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.579 qpair failed and we were unable to recover it. 00:39:27.579 [2024-07-22 10:55:33.206140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.579 [2024-07-22 10:55:33.206151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.579 qpair failed and we were unable to recover it. 00:39:27.579 [2024-07-22 10:55:33.206473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.579 [2024-07-22 10:55:33.206484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.579 qpair failed and we were unable to recover it. 
00:39:27.579 [2024-07-22 10:55:33.206585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.579 [2024-07-22 10:55:33.206596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.579 qpair failed and we were unable to recover it. 00:39:27.579 [2024-07-22 10:55:33.206925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.579 [2024-07-22 10:55:33.206935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.579 qpair failed and we were unable to recover it. 00:39:27.579 [2024-07-22 10:55:33.207282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.579 [2024-07-22 10:55:33.207292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.579 qpair failed and we were unable to recover it. 00:39:27.579 [2024-07-22 10:55:33.207635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.579 [2024-07-22 10:55:33.207648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.579 qpair failed and we were unable to recover it. 00:39:27.579 [2024-07-22 10:55:33.207963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.579 [2024-07-22 10:55:33.207974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.579 qpair failed and we were unable to recover it. 00:39:27.579 [2024-07-22 10:55:33.208279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.579 [2024-07-22 10:55:33.208290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.579 qpair failed and we were unable to recover it. 00:39:27.579 [2024-07-22 10:55:33.208635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.579 [2024-07-22 10:55:33.208646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.579 qpair failed and we were unable to recover it. 00:39:27.579 [2024-07-22 10:55:33.208956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.579 [2024-07-22 10:55:33.208968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.579 qpair failed and we were unable to recover it. 00:39:27.579 [2024-07-22 10:55:33.209158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.579 [2024-07-22 10:55:33.209169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.579 qpair failed and we were unable to recover it. 00:39:27.579 [2024-07-22 10:55:33.209494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.579 [2024-07-22 10:55:33.209504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.579 qpair failed and we were unable to recover it. 
00:39:27.579 [2024-07-22 10:55:33.209847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.579 [2024-07-22 10:55:33.209857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.579 qpair failed and we were unable to recover it. 00:39:27.579 [2024-07-22 10:55:33.210173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.579 [2024-07-22 10:55:33.210184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.579 qpair failed and we were unable to recover it. 00:39:27.579 [2024-07-22 10:55:33.210502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.579 [2024-07-22 10:55:33.210513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.579 qpair failed and we were unable to recover it. 00:39:27.579 [2024-07-22 10:55:33.210853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.579 [2024-07-22 10:55:33.210863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.579 qpair failed and we were unable to recover it. 00:39:27.579 [2024-07-22 10:55:33.211210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.579 [2024-07-22 10:55:33.211221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.579 qpair failed and we were unable to recover it. 00:39:27.579 [2024-07-22 10:55:33.211545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.579 [2024-07-22 10:55:33.211557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.579 qpair failed and we were unable to recover it. 00:39:27.579 [2024-07-22 10:55:33.211880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.579 [2024-07-22 10:55:33.211891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.579 qpair failed and we were unable to recover it. 00:39:27.579 [2024-07-22 10:55:33.212234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.579 [2024-07-22 10:55:33.212244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.579 qpair failed and we were unable to recover it. 00:39:27.579 [2024-07-22 10:55:33.212488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.579 [2024-07-22 10:55:33.212499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.579 qpair failed and we were unable to recover it. 00:39:27.579 [2024-07-22 10:55:33.212785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.579 [2024-07-22 10:55:33.212795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.579 qpair failed and we were unable to recover it. 
00:39:27.579 [2024-07-22 10:55:33.213148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.579 [2024-07-22 10:55:33.213158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.579 qpair failed and we were unable to recover it. 00:39:27.579 [2024-07-22 10:55:33.213502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.579 [2024-07-22 10:55:33.213513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.579 qpair failed and we were unable to recover it. 00:39:27.579 [2024-07-22 10:55:33.213815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.579 [2024-07-22 10:55:33.213825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.579 qpair failed and we were unable to recover it. 00:39:27.579 [2024-07-22 10:55:33.214146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.579 [2024-07-22 10:55:33.214157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.579 qpair failed and we were unable to recover it. 00:39:27.579 [2024-07-22 10:55:33.214385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.579 [2024-07-22 10:55:33.214399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.579 qpair failed and we were unable to recover it. 00:39:27.579 [2024-07-22 10:55:33.214706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.579 [2024-07-22 10:55:33.214718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.579 qpair failed and we were unable to recover it. 00:39:27.579 [2024-07-22 10:55:33.215068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.579 [2024-07-22 10:55:33.215079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.579 qpair failed and we were unable to recover it. 00:39:27.579 [2024-07-22 10:55:33.215392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.579 [2024-07-22 10:55:33.215407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.579 qpair failed and we were unable to recover it. 00:39:27.579 [2024-07-22 10:55:33.215748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.579 [2024-07-22 10:55:33.215758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.579 qpair failed and we were unable to recover it. 00:39:27.579 [2024-07-22 10:55:33.216104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.579 [2024-07-22 10:55:33.216114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.579 qpair failed and we were unable to recover it. 
00:39:27.579 [2024-07-22 10:55:33.216424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.579 [2024-07-22 10:55:33.216435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.579 qpair failed and we were unable to recover it. 00:39:27.579 [2024-07-22 10:55:33.216662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.579 [2024-07-22 10:55:33.216673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.579 qpair failed and we were unable to recover it. 00:39:27.579 [2024-07-22 10:55:33.216988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.579 [2024-07-22 10:55:33.216999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.579 qpair failed and we were unable to recover it. 00:39:27.579 [2024-07-22 10:55:33.217343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.579 [2024-07-22 10:55:33.217353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.579 qpair failed and we were unable to recover it. 00:39:27.579 [2024-07-22 10:55:33.217694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.579 [2024-07-22 10:55:33.217705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.579 qpair failed and we were unable to recover it. 00:39:27.579 [2024-07-22 10:55:33.217866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.579 [2024-07-22 10:55:33.217878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.579 qpair failed and we were unable to recover it. 00:39:27.579 [2024-07-22 10:55:33.218212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.579 [2024-07-22 10:55:33.218222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.580 qpair failed and we were unable to recover it. 00:39:27.580 [2024-07-22 10:55:33.218570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.580 [2024-07-22 10:55:33.218581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.580 qpair failed and we were unable to recover it. 00:39:27.580 [2024-07-22 10:55:33.218958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.580 [2024-07-22 10:55:33.218971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.580 qpair failed and we were unable to recover it. 00:39:27.580 [2024-07-22 10:55:33.219291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.580 [2024-07-22 10:55:33.219301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.580 qpair failed and we were unable to recover it. 
00:39:27.580 [2024-07-22 10:55:33.219641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.580 [2024-07-22 10:55:33.219652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.580 qpair failed and we were unable to recover it. 00:39:27.580 [2024-07-22 10:55:33.219996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.580 [2024-07-22 10:55:33.220007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.580 qpair failed and we were unable to recover it. 00:39:27.580 [2024-07-22 10:55:33.220348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.580 [2024-07-22 10:55:33.220359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.580 qpair failed and we were unable to recover it. 00:39:27.580 [2024-07-22 10:55:33.220695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.580 [2024-07-22 10:55:33.220706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.580 qpair failed and we were unable to recover it. 00:39:27.580 [2024-07-22 10:55:33.221029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.580 [2024-07-22 10:55:33.221040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.580 qpair failed and we were unable to recover it. 00:39:27.580 [2024-07-22 10:55:33.221388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.580 [2024-07-22 10:55:33.221406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.580 qpair failed and we were unable to recover it. 00:39:27.580 [2024-07-22 10:55:33.221748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.580 [2024-07-22 10:55:33.221759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.580 qpair failed and we were unable to recover it. 00:39:27.580 [2024-07-22 10:55:33.222078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.580 [2024-07-22 10:55:33.222089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.580 qpair failed and we were unable to recover it. 00:39:27.580 [2024-07-22 10:55:33.222411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.580 [2024-07-22 10:55:33.222422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.580 qpair failed and we were unable to recover it. 00:39:27.580 [2024-07-22 10:55:33.222712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.580 [2024-07-22 10:55:33.222722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.580 qpair failed and we were unable to recover it. 
00:39:27.580 [2024-07-22 10:55:33.223074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.580 [2024-07-22 10:55:33.223084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.580 qpair failed and we were unable to recover it. 00:39:27.580 [2024-07-22 10:55:33.223423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.580 [2024-07-22 10:55:33.223434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.580 qpair failed and we were unable to recover it. 00:39:27.580 [2024-07-22 10:55:33.223666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.580 [2024-07-22 10:55:33.223676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.580 qpair failed and we were unable to recover it. 00:39:27.580 [2024-07-22 10:55:33.224009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.580 [2024-07-22 10:55:33.224020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.580 qpair failed and we were unable to recover it. 00:39:27.580 [2024-07-22 10:55:33.224374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.580 [2024-07-22 10:55:33.224385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.580 qpair failed and we were unable to recover it. 00:39:27.580 [2024-07-22 10:55:33.224710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.580 [2024-07-22 10:55:33.224721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.580 qpair failed and we were unable to recover it. 00:39:27.580 [2024-07-22 10:55:33.225040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.580 [2024-07-22 10:55:33.225051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.580 qpair failed and we were unable to recover it. 00:39:27.580 [2024-07-22 10:55:33.225351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.580 [2024-07-22 10:55:33.225363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.580 qpair failed and we were unable to recover it. 00:39:27.580 [2024-07-22 10:55:33.225593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.580 [2024-07-22 10:55:33.225605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.580 qpair failed and we were unable to recover it. 00:39:27.580 [2024-07-22 10:55:33.225917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.580 [2024-07-22 10:55:33.225928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.580 qpair failed and we were unable to recover it. 
00:39:27.580 [2024-07-22 10:55:33.226271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.580 [2024-07-22 10:55:33.226282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.580 qpair failed and we were unable to recover it. 00:39:27.580 [2024-07-22 10:55:33.226618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.580 [2024-07-22 10:55:33.226629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.580 qpair failed and we were unable to recover it. 00:39:27.580 [2024-07-22 10:55:33.226821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.580 [2024-07-22 10:55:33.226833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.580 qpair failed and we were unable to recover it. 00:39:27.580 [2024-07-22 10:55:33.227136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.580 [2024-07-22 10:55:33.227146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.580 qpair failed and we were unable to recover it. 00:39:27.580 [2024-07-22 10:55:33.227413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.580 [2024-07-22 10:55:33.227424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.580 qpair failed and we were unable to recover it. 00:39:27.580 [2024-07-22 10:55:33.227726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.580 [2024-07-22 10:55:33.227738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.580 qpair failed and we were unable to recover it. 00:39:27.580 [2024-07-22 10:55:33.228085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.580 [2024-07-22 10:55:33.228095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.580 qpair failed and we were unable to recover it. 00:39:27.580 [2024-07-22 10:55:33.228416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.580 [2024-07-22 10:55:33.228427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.580 qpair failed and we were unable to recover it. 00:39:27.580 [2024-07-22 10:55:33.228747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.580 [2024-07-22 10:55:33.228757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.580 qpair failed and we were unable to recover it. 00:39:27.580 [2024-07-22 10:55:33.229077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.580 [2024-07-22 10:55:33.229087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.580 qpair failed and we were unable to recover it. 
00:39:27.580 [2024-07-22 10:55:33.229433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.580 [2024-07-22 10:55:33.229444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.580 qpair failed and we were unable to recover it. 00:39:27.580 [2024-07-22 10:55:33.229766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.580 [2024-07-22 10:55:33.229777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.580 qpair failed and we were unable to recover it. 00:39:27.580 [2024-07-22 10:55:33.230094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.580 [2024-07-22 10:55:33.230104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.580 qpair failed and we were unable to recover it. 00:39:27.580 [2024-07-22 10:55:33.230413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.580 [2024-07-22 10:55:33.230424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.580 qpair failed and we were unable to recover it. 00:39:27.580 [2024-07-22 10:55:33.230774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.580 [2024-07-22 10:55:33.230784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.580 qpair failed and we were unable to recover it. 00:39:27.580 [2024-07-22 10:55:33.231107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.580 [2024-07-22 10:55:33.231117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.580 qpair failed and we were unable to recover it. 00:39:27.580 [2024-07-22 10:55:33.231516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.580 [2024-07-22 10:55:33.231527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.580 qpair failed and we were unable to recover it. 00:39:27.581 [2024-07-22 10:55:33.231836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.581 [2024-07-22 10:55:33.231846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.581 qpair failed and we were unable to recover it. 00:39:27.581 [2024-07-22 10:55:33.232185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.581 [2024-07-22 10:55:33.232195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.581 qpair failed and we were unable to recover it. 00:39:27.581 [2024-07-22 10:55:33.232517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.581 [2024-07-22 10:55:33.232528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.581 qpair failed and we were unable to recover it. 
00:39:27.581 [2024-07-22 10:55:33.232851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.581 [2024-07-22 10:55:33.232862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.581 qpair failed and we were unable to recover it. 00:39:27.581 [2024-07-22 10:55:33.233215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.581 [2024-07-22 10:55:33.233225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.581 qpair failed and we were unable to recover it. 00:39:27.581 [2024-07-22 10:55:33.233624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.581 [2024-07-22 10:55:33.233636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.581 qpair failed and we were unable to recover it. 00:39:27.581 [2024-07-22 10:55:33.233946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.581 [2024-07-22 10:55:33.233957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.581 qpair failed and we were unable to recover it. 00:39:27.581 [2024-07-22 10:55:33.234277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.581 [2024-07-22 10:55:33.234288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.581 qpair failed and we were unable to recover it. 00:39:27.581 [2024-07-22 10:55:33.234623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.581 [2024-07-22 10:55:33.234634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.581 qpair failed and we were unable to recover it. 00:39:27.581 [2024-07-22 10:55:33.234972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.581 [2024-07-22 10:55:33.234982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.581 qpair failed and we were unable to recover it. 00:39:27.581 [2024-07-22 10:55:33.235300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.581 [2024-07-22 10:55:33.235310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.581 qpair failed and we were unable to recover it. 00:39:27.581 [2024-07-22 10:55:33.235642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.581 [2024-07-22 10:55:33.235653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.581 qpair failed and we were unable to recover it. 00:39:27.581 [2024-07-22 10:55:33.236001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.581 [2024-07-22 10:55:33.236011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.581 qpair failed and we were unable to recover it. 
00:39:27.581 [2024-07-22 10:55:33.236329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.581 [2024-07-22 10:55:33.236339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.581 qpair failed and we were unable to recover it. 00:39:27.581 [2024-07-22 10:55:33.236663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.581 [2024-07-22 10:55:33.236674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.581 qpair failed and we were unable to recover it. 00:39:27.581 [2024-07-22 10:55:33.236954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.581 [2024-07-22 10:55:33.236964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.581 qpair failed and we were unable to recover it. 00:39:27.581 [2024-07-22 10:55:33.237309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.581 [2024-07-22 10:55:33.237320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.581 qpair failed and we were unable to recover it. 00:39:27.581 [2024-07-22 10:55:33.237662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.581 [2024-07-22 10:55:33.237673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.581 qpair failed and we were unable to recover it. 00:39:27.581 [2024-07-22 10:55:33.237995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.581 [2024-07-22 10:55:33.238006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.581 qpair failed and we were unable to recover it. 00:39:27.581 [2024-07-22 10:55:33.238343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.581 [2024-07-22 10:55:33.238354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.581 qpair failed and we were unable to recover it. 00:39:27.581 [2024-07-22 10:55:33.238740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.581 [2024-07-22 10:55:33.238751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.581 qpair failed and we were unable to recover it. 00:39:27.581 [2024-07-22 10:55:33.239061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.581 [2024-07-22 10:55:33.239072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.581 qpair failed and we were unable to recover it. 00:39:27.581 [2024-07-22 10:55:33.239388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.581 [2024-07-22 10:55:33.239402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.581 qpair failed and we were unable to recover it. 
00:39:27.581 [2024-07-22 10:55:33.239728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.581 [2024-07-22 10:55:33.239738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.581 qpair failed and we were unable to recover it. 00:39:27.581 [2024-07-22 10:55:33.240081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.581 [2024-07-22 10:55:33.240091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.581 qpair failed and we were unable to recover it. 00:39:27.581 [2024-07-22 10:55:33.240441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.581 [2024-07-22 10:55:33.240452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.581 qpair failed and we were unable to recover it. 00:39:27.581 [2024-07-22 10:55:33.240771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.581 [2024-07-22 10:55:33.240781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.581 qpair failed and we were unable to recover it. 00:39:27.581 [2024-07-22 10:55:33.241105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.581 [2024-07-22 10:55:33.241115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.581 qpair failed and we were unable to recover it. 00:39:27.581 [2024-07-22 10:55:33.241426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.581 [2024-07-22 10:55:33.241438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.581 qpair failed and we were unable to recover it. 00:39:27.581 [2024-07-22 10:55:33.241779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.581 [2024-07-22 10:55:33.241790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.581 qpair failed and we were unable to recover it. 00:39:27.581 [2024-07-22 10:55:33.242111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.581 [2024-07-22 10:55:33.242121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.581 qpair failed and we were unable to recover it. 00:39:27.581 [2024-07-22 10:55:33.242439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.581 [2024-07-22 10:55:33.242449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.581 qpair failed and we were unable to recover it. 00:39:27.581 [2024-07-22 10:55:33.242666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.581 [2024-07-22 10:55:33.242677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.581 qpair failed and we were unable to recover it. 
00:39:27.581 [2024-07-22 10:55:33.242998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.581 [2024-07-22 10:55:33.243008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.581 qpair failed and we were unable to recover it. 00:39:27.581 [2024-07-22 10:55:33.243336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.581 [2024-07-22 10:55:33.243347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.581 qpair failed and we were unable to recover it. 00:39:27.581 [2024-07-22 10:55:33.243746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.581 [2024-07-22 10:55:33.243757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.582 qpair failed and we were unable to recover it. 00:39:27.582 [2024-07-22 10:55:33.244064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.582 [2024-07-22 10:55:33.244075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.582 qpair failed and we were unable to recover it. 00:39:27.582 [2024-07-22 10:55:33.244428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.582 [2024-07-22 10:55:33.244438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.582 qpair failed and we were unable to recover it. 00:39:27.582 [2024-07-22 10:55:33.244788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.582 [2024-07-22 10:55:33.244798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.582 qpair failed and we were unable to recover it. 00:39:27.582 [2024-07-22 10:55:33.245136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.582 [2024-07-22 10:55:33.245146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.582 qpair failed and we were unable to recover it. 00:39:27.582 [2024-07-22 10:55:33.245493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.582 [2024-07-22 10:55:33.245504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.582 qpair failed and we were unable to recover it. 00:39:27.582 [2024-07-22 10:55:33.245703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.582 [2024-07-22 10:55:33.245715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.582 qpair failed and we were unable to recover it. 00:39:27.582 [2024-07-22 10:55:33.246048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.582 [2024-07-22 10:55:33.246059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.582 qpair failed and we were unable to recover it. 
00:39:27.582 [2024-07-22 10:55:33.246392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.582 [2024-07-22 10:55:33.246406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.582 qpair failed and we were unable to recover it. 00:39:27.582 [2024-07-22 10:55:33.246748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.582 [2024-07-22 10:55:33.246758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.582 qpair failed and we were unable to recover it. 00:39:27.582 [2024-07-22 10:55:33.247100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.582 [2024-07-22 10:55:33.247111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.582 qpair failed and we were unable to recover it. 00:39:27.854 [2024-07-22 10:55:33.247453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.854 [2024-07-22 10:55:33.247465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.854 qpair failed and we were unable to recover it. 00:39:27.854 [2024-07-22 10:55:33.247783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.854 [2024-07-22 10:55:33.247794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.854 qpair failed and we were unable to recover it. 00:39:27.854 [2024-07-22 10:55:33.248008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.854 [2024-07-22 10:55:33.248018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.854 qpair failed and we were unable to recover it. 00:39:27.854 [2024-07-22 10:55:33.248319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.854 [2024-07-22 10:55:33.248329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.854 qpair failed and we were unable to recover it. 00:39:27.854 [2024-07-22 10:55:33.248657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.854 [2024-07-22 10:55:33.248669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.854 qpair failed and we were unable to recover it. 00:39:27.854 [2024-07-22 10:55:33.248989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.854 [2024-07-22 10:55:33.249000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.854 qpair failed and we were unable to recover it. 00:39:27.854 [2024-07-22 10:55:33.249342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.854 [2024-07-22 10:55:33.249352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.854 qpair failed and we were unable to recover it. 
00:39:27.854 [2024-07-22 10:55:33.249697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.854 [2024-07-22 10:55:33.249708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.854 qpair failed and we were unable to recover it. 00:39:27.854 [2024-07-22 10:55:33.250026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.854 [2024-07-22 10:55:33.250037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.854 qpair failed and we were unable to recover it. 00:39:27.854 [2024-07-22 10:55:33.250291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.854 [2024-07-22 10:55:33.250302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.854 qpair failed and we were unable to recover it. 00:39:27.854 [2024-07-22 10:55:33.250652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.854 [2024-07-22 10:55:33.250665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.854 qpair failed and we were unable to recover it. 00:39:27.854 [2024-07-22 10:55:33.251041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.854 [2024-07-22 10:55:33.251051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.854 qpair failed and we were unable to recover it. 00:39:27.854 [2024-07-22 10:55:33.251281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.854 [2024-07-22 10:55:33.251291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.854 qpair failed and we were unable to recover it. 00:39:27.854 [2024-07-22 10:55:33.251599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.854 [2024-07-22 10:55:33.251610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.854 qpair failed and we were unable to recover it. 00:39:27.854 [2024-07-22 10:55:33.251921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.854 [2024-07-22 10:55:33.251931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.854 qpair failed and we were unable to recover it. 00:39:27.854 [2024-07-22 10:55:33.252276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.854 [2024-07-22 10:55:33.252287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.854 qpair failed and we were unable to recover it. 00:39:27.854 [2024-07-22 10:55:33.252629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.854 [2024-07-22 10:55:33.252639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.854 qpair failed and we were unable to recover it. 
00:39:27.854 [2024-07-22 10:55:33.252900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.854 [2024-07-22 10:55:33.252910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.854 qpair failed and we were unable to recover it. 00:39:27.854 [2024-07-22 10:55:33.253260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.854 [2024-07-22 10:55:33.253270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.854 qpair failed and we were unable to recover it. 00:39:27.854 [2024-07-22 10:55:33.253606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.854 [2024-07-22 10:55:33.253617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.854 qpair failed and we were unable to recover it. 00:39:27.854 [2024-07-22 10:55:33.253939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.854 [2024-07-22 10:55:33.253950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.854 qpair failed and we were unable to recover it. 00:39:27.854 [2024-07-22 10:55:33.254274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.854 [2024-07-22 10:55:33.254284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.854 qpair failed and we were unable to recover it. 00:39:27.854 [2024-07-22 10:55:33.254628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.854 [2024-07-22 10:55:33.254638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.854 qpair failed and we were unable to recover it. 00:39:27.854 [2024-07-22 10:55:33.254984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.854 [2024-07-22 10:55:33.254995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.854 qpair failed and we were unable to recover it. 00:39:27.854 [2024-07-22 10:55:33.255326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.854 [2024-07-22 10:55:33.255337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.854 qpair failed and we were unable to recover it. 00:39:27.854 [2024-07-22 10:55:33.255667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.854 [2024-07-22 10:55:33.255678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.854 qpair failed and we were unable to recover it. 00:39:27.854 [2024-07-22 10:55:33.256030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.256040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 
00:39:27.855 [2024-07-22 10:55:33.256383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.256393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 00:39:27.855 [2024-07-22 10:55:33.256699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.256709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 00:39:27.855 [2024-07-22 10:55:33.257031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.257042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 00:39:27.855 [2024-07-22 10:55:33.257384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.257401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 00:39:27.855 [2024-07-22 10:55:33.257741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.257751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 00:39:27.855 [2024-07-22 10:55:33.258148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.258158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 00:39:27.855 [2024-07-22 10:55:33.258465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.258476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 00:39:27.855 [2024-07-22 10:55:33.258816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.258827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 00:39:27.855 [2024-07-22 10:55:33.259175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.259185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 00:39:27.855 [2024-07-22 10:55:33.259504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.259514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 
00:39:27.855 [2024-07-22 10:55:33.259832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.259844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 00:39:27.855 [2024-07-22 10:55:33.260192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.260203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 00:39:27.855 [2024-07-22 10:55:33.260516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.260527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 00:39:27.855 [2024-07-22 10:55:33.260855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.260866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 00:39:27.855 [2024-07-22 10:55:33.261180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.261191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 00:39:27.855 [2024-07-22 10:55:33.261539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.261550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 00:39:27.855 [2024-07-22 10:55:33.261861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.261871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 00:39:27.855 [2024-07-22 10:55:33.262180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.262190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 00:39:27.855 [2024-07-22 10:55:33.262495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.262506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 00:39:27.855 [2024-07-22 10:55:33.262846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.262857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 
00:39:27.855 [2024-07-22 10:55:33.263151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.263161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 00:39:27.855 [2024-07-22 10:55:33.263487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.263498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 00:39:27.855 [2024-07-22 10:55:33.263713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.263725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 00:39:27.855 [2024-07-22 10:55:33.264050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.264060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 00:39:27.855 [2024-07-22 10:55:33.264440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.264452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 00:39:27.855 [2024-07-22 10:55:33.264761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.264772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 00:39:27.855 [2024-07-22 10:55:33.265104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.265114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 00:39:27.855 [2024-07-22 10:55:33.265457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.265468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 00:39:27.855 [2024-07-22 10:55:33.265813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.265823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 00:39:27.855 [2024-07-22 10:55:33.266156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.266167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 
00:39:27.855 [2024-07-22 10:55:33.266504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.266515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 00:39:27.855 [2024-07-22 10:55:33.266858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.266869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 00:39:27.855 [2024-07-22 10:55:33.267167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.267178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 00:39:27.855 [2024-07-22 10:55:33.267470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.267481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 00:39:27.855 [2024-07-22 10:55:33.267800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.267811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 00:39:27.855 [2024-07-22 10:55:33.268155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.268165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 00:39:27.855 [2024-07-22 10:55:33.268506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.268517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 00:39:27.855 [2024-07-22 10:55:33.268844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.268859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 00:39:27.855 [2024-07-22 10:55:33.269197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.269208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 00:39:27.855 [2024-07-22 10:55:33.269551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.269562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 
00:39:27.855 [2024-07-22 10:55:33.269779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.269789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 00:39:27.855 [2024-07-22 10:55:33.270122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.270132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 00:39:27.855 [2024-07-22 10:55:33.270512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.270524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 00:39:27.855 [2024-07-22 10:55:33.270868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.270879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 00:39:27.855 [2024-07-22 10:55:33.271094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.271105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 00:39:27.855 [2024-07-22 10:55:33.271443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.271454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 00:39:27.855 [2024-07-22 10:55:33.271792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.271803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 00:39:27.855 [2024-07-22 10:55:33.272138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.272149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 00:39:27.855 [2024-07-22 10:55:33.272481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.272492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 00:39:27.855 [2024-07-22 10:55:33.272874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.272885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 
00:39:27.855 [2024-07-22 10:55:33.273205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.273216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 00:39:27.855 [2024-07-22 10:55:33.273551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.273562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 00:39:27.855 [2024-07-22 10:55:33.273866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.273878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 00:39:27.855 [2024-07-22 10:55:33.274197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.274208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 00:39:27.855 [2024-07-22 10:55:33.274526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.855 [2024-07-22 10:55:33.274536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.855 qpair failed and we were unable to recover it. 00:39:27.856 [2024-07-22 10:55:33.274823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.856 [2024-07-22 10:55:33.274834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.856 qpair failed and we were unable to recover it. 00:39:27.856 [2024-07-22 10:55:33.275177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.856 [2024-07-22 10:55:33.275187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.856 qpair failed and we were unable to recover it. 00:39:27.856 [2024-07-22 10:55:33.275513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.856 [2024-07-22 10:55:33.275523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.856 qpair failed and we were unable to recover it. 00:39:27.856 [2024-07-22 10:55:33.275845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.856 [2024-07-22 10:55:33.275856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.856 qpair failed and we were unable to recover it. 00:39:27.856 [2024-07-22 10:55:33.276213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.856 [2024-07-22 10:55:33.276224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.856 qpair failed and we were unable to recover it. 
00:39:27.856 [2024-07-22 10:55:33.276544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.856 [2024-07-22 10:55:33.276555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.856 qpair failed and we were unable to recover it. 00:39:27.856 [2024-07-22 10:55:33.276845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.856 [2024-07-22 10:55:33.276855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.856 qpair failed and we were unable to recover it. 00:39:27.856 [2024-07-22 10:55:33.277182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.856 [2024-07-22 10:55:33.277193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.856 qpair failed and we were unable to recover it. 00:39:27.856 [2024-07-22 10:55:33.277534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.856 [2024-07-22 10:55:33.277545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.856 qpair failed and we were unable to recover it. 00:39:27.856 [2024-07-22 10:55:33.277884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.856 [2024-07-22 10:55:33.277895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.856 qpair failed and we were unable to recover it. 00:39:27.856 [2024-07-22 10:55:33.278106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.856 [2024-07-22 10:55:33.278116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.856 qpair failed and we were unable to recover it. 00:39:27.856 [2024-07-22 10:55:33.278440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.856 [2024-07-22 10:55:33.278451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.856 qpair failed and we were unable to recover it. 00:39:27.856 [2024-07-22 10:55:33.278793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.856 [2024-07-22 10:55:33.278804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.856 qpair failed and we were unable to recover it. 00:39:27.856 [2024-07-22 10:55:33.279107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.856 [2024-07-22 10:55:33.279118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.856 qpair failed and we were unable to recover it. 00:39:27.856 [2024-07-22 10:55:33.279448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.856 [2024-07-22 10:55:33.279459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.856 qpair failed and we were unable to recover it. 
00:39:27.856 [2024-07-22 10:55:33.279740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.856 [2024-07-22 10:55:33.279751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.856 qpair failed and we were unable to recover it. 00:39:27.856 [2024-07-22 10:55:33.280076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.856 [2024-07-22 10:55:33.280087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.856 qpair failed and we were unable to recover it. 00:39:27.856 [2024-07-22 10:55:33.280429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.856 [2024-07-22 10:55:33.280439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.856 qpair failed and we were unable to recover it. 00:39:27.856 [2024-07-22 10:55:33.280823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.856 [2024-07-22 10:55:33.280834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.856 qpair failed and we were unable to recover it. 00:39:27.856 [2024-07-22 10:55:33.281159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.856 [2024-07-22 10:55:33.281170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.856 qpair failed and we were unable to recover it. 00:39:27.856 [2024-07-22 10:55:33.281462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.856 [2024-07-22 10:55:33.281472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.856 qpair failed and we were unable to recover it. 00:39:27.856 [2024-07-22 10:55:33.281811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.856 [2024-07-22 10:55:33.281822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.856 qpair failed and we were unable to recover it. 00:39:27.856 [2024-07-22 10:55:33.282143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.856 [2024-07-22 10:55:33.282153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.856 qpair failed and we were unable to recover it. 00:39:27.856 [2024-07-22 10:55:33.282474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.856 [2024-07-22 10:55:33.282485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.856 qpair failed and we were unable to recover it. 00:39:27.856 [2024-07-22 10:55:33.282727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.856 [2024-07-22 10:55:33.282737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.856 qpair failed and we were unable to recover it. 
00:39:27.856 [2024-07-22 10:55:33.283078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.856 [2024-07-22 10:55:33.283088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.856 qpair failed and we were unable to recover it. 00:39:27.856 [2024-07-22 10:55:33.283441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.856 [2024-07-22 10:55:33.283451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.856 qpair failed and we were unable to recover it. 00:39:27.856 [2024-07-22 10:55:33.283872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.856 [2024-07-22 10:55:33.283884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.856 qpair failed and we were unable to recover it. 00:39:27.856 [2024-07-22 10:55:33.284200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.856 [2024-07-22 10:55:33.284211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.856 qpair failed and we were unable to recover it. 00:39:27.856 [2024-07-22 10:55:33.284600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.856 [2024-07-22 10:55:33.284611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.856 qpair failed and we were unable to recover it. 00:39:27.856 [2024-07-22 10:55:33.284922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.856 [2024-07-22 10:55:33.284932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.856 qpair failed and we were unable to recover it. 00:39:27.856 [2024-07-22 10:55:33.285252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.856 [2024-07-22 10:55:33.285263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.856 qpair failed and we were unable to recover it. 00:39:27.856 [2024-07-22 10:55:33.285611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.856 [2024-07-22 10:55:33.285621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.856 qpair failed and we were unable to recover it. 00:39:27.856 [2024-07-22 10:55:33.285961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.856 [2024-07-22 10:55:33.285972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.856 qpair failed and we were unable to recover it. 00:39:27.856 [2024-07-22 10:55:33.286316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.856 [2024-07-22 10:55:33.286327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.856 qpair failed and we were unable to recover it. 
00:39:27.856 [2024-07-22 10:55:33.286671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.856 [2024-07-22 10:55:33.286682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.856 qpair failed and we were unable to recover it. 00:39:27.856 [2024-07-22 10:55:33.286994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.856 [2024-07-22 10:55:33.287005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.856 qpair failed and we were unable to recover it. 00:39:27.856 [2024-07-22 10:55:33.287321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.856 [2024-07-22 10:55:33.287331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.856 qpair failed and we were unable to recover it. 00:39:27.856 [2024-07-22 10:55:33.287657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.856 [2024-07-22 10:55:33.287668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.856 qpair failed and we were unable to recover it. 00:39:27.856 [2024-07-22 10:55:33.287961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.856 [2024-07-22 10:55:33.287971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.856 qpair failed and we were unable to recover it. 00:39:27.856 [2024-07-22 10:55:33.288308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.856 [2024-07-22 10:55:33.288319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.856 qpair failed and we were unable to recover it. 00:39:27.856 [2024-07-22 10:55:33.288655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.856 [2024-07-22 10:55:33.288666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.856 qpair failed and we were unable to recover it. 00:39:27.856 [2024-07-22 10:55:33.289065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.856 [2024-07-22 10:55:33.289076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.856 qpair failed and we were unable to recover it. 00:39:27.856 [2024-07-22 10:55:33.289390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.856 [2024-07-22 10:55:33.289405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.856 qpair failed and we were unable to recover it. 00:39:27.856 [2024-07-22 10:55:33.289749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.856 [2024-07-22 10:55:33.289759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.856 qpair failed and we were unable to recover it. 
00:39:27.856 [2024-07-22 10:55:33.290113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.856 [2024-07-22 10:55:33.290124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.856 qpair failed and we were unable to recover it. 00:39:27.856 [2024-07-22 10:55:33.290447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.856 [2024-07-22 10:55:33.290459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.856 qpair failed and we were unable to recover it. 00:39:27.856 [2024-07-22 10:55:33.290786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.856 [2024-07-22 10:55:33.290796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.856 qpair failed and we were unable to recover it. 00:39:27.856 [2024-07-22 10:55:33.291139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.856 [2024-07-22 10:55:33.291149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.856 qpair failed and we were unable to recover it. 00:39:27.856 [2024-07-22 10:55:33.291481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.856 [2024-07-22 10:55:33.291492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.856 qpair failed and we were unable to recover it. 00:39:27.856 [2024-07-22 10:55:33.291817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.856 [2024-07-22 10:55:33.291830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.856 qpair failed and we were unable to recover it. 00:39:27.856 [2024-07-22 10:55:33.292146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.856 [2024-07-22 10:55:33.292157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.856 qpair failed and we were unable to recover it. 00:39:27.856 [2024-07-22 10:55:33.292463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.856 [2024-07-22 10:55:33.292474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.856 qpair failed and we were unable to recover it. 00:39:27.856 [2024-07-22 10:55:33.292794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.856 [2024-07-22 10:55:33.292804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.856 qpair failed and we were unable to recover it. 00:39:27.856 [2024-07-22 10:55:33.293131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.857 [2024-07-22 10:55:33.293141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.857 qpair failed and we were unable to recover it. 
00:39:27.857 [2024-07-22 10:55:33.293464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.857 [2024-07-22 10:55:33.293474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.857 qpair failed and we were unable to recover it. 00:39:27.857 [2024-07-22 10:55:33.293862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.857 [2024-07-22 10:55:33.293873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.857 qpair failed and we were unable to recover it. 00:39:27.857 [2024-07-22 10:55:33.294183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.857 [2024-07-22 10:55:33.294194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.857 qpair failed and we were unable to recover it. 00:39:27.857 [2024-07-22 10:55:33.294525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.857 [2024-07-22 10:55:33.294535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.857 qpair failed and we were unable to recover it. 00:39:27.857 [2024-07-22 10:55:33.294855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.857 [2024-07-22 10:55:33.294865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.857 qpair failed and we were unable to recover it. 00:39:27.857 [2024-07-22 10:55:33.295209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.857 [2024-07-22 10:55:33.295220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.857 qpair failed and we were unable to recover it. 00:39:27.857 [2024-07-22 10:55:33.295566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.857 [2024-07-22 10:55:33.295577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.857 qpair failed and we were unable to recover it. 00:39:27.857 [2024-07-22 10:55:33.295897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.857 [2024-07-22 10:55:33.295907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.857 qpair failed and we were unable to recover it. 00:39:27.857 [2024-07-22 10:55:33.296259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.857 [2024-07-22 10:55:33.296269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.857 qpair failed and we were unable to recover it. 00:39:27.857 [2024-07-22 10:55:33.296610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.857 [2024-07-22 10:55:33.296621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.857 qpair failed and we were unable to recover it. 
00:39:27.857 [2024-07-22 10:55:33.296953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.857 [2024-07-22 10:55:33.296964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.857 qpair failed and we were unable to recover it. 00:39:27.857 [2024-07-22 10:55:33.297285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.857 [2024-07-22 10:55:33.297296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.857 qpair failed and we were unable to recover it. 00:39:27.857 [2024-07-22 10:55:33.297631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.857 [2024-07-22 10:55:33.297642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.857 qpair failed and we were unable to recover it. 00:39:27.857 [2024-07-22 10:55:33.297828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.857 [2024-07-22 10:55:33.297841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.857 qpair failed and we were unable to recover it. 00:39:27.857 [2024-07-22 10:55:33.298186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.857 [2024-07-22 10:55:33.298196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.857 qpair failed and we were unable to recover it. 00:39:27.857 [2024-07-22 10:55:33.298515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.857 [2024-07-22 10:55:33.298526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.857 qpair failed and we were unable to recover it. 00:39:27.857 [2024-07-22 10:55:33.298848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.857 [2024-07-22 10:55:33.298859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.857 qpair failed and we were unable to recover it. 00:39:27.857 [2024-07-22 10:55:33.299167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.857 [2024-07-22 10:55:33.299177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.857 qpair failed and we were unable to recover it. 00:39:27.857 [2024-07-22 10:55:33.299494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.857 [2024-07-22 10:55:33.299505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.857 qpair failed and we were unable to recover it. 00:39:27.857 [2024-07-22 10:55:33.299843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.857 [2024-07-22 10:55:33.299854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.857 qpair failed and we were unable to recover it. 
00:39:27.857 [2024-07-22 10:55:33.300166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.857 [2024-07-22 10:55:33.300176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.857 qpair failed and we were unable to recover it. 00:39:27.857 [2024-07-22 10:55:33.300515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.857 [2024-07-22 10:55:33.300525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.857 qpair failed and we were unable to recover it. 00:39:27.857 [2024-07-22 10:55:33.300824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.857 [2024-07-22 10:55:33.300837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.857 qpair failed and we were unable to recover it. 00:39:27.857 [2024-07-22 10:55:33.301146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.857 [2024-07-22 10:55:33.301157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.857 qpair failed and we were unable to recover it. 00:39:27.857 [2024-07-22 10:55:33.301479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.857 [2024-07-22 10:55:33.301489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.857 qpair failed and we were unable to recover it. 00:39:27.857 [2024-07-22 10:55:33.301832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.857 [2024-07-22 10:55:33.301842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.857 qpair failed and we were unable to recover it. 00:39:27.857 [2024-07-22 10:55:33.302185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.857 [2024-07-22 10:55:33.302195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.857 qpair failed and we were unable to recover it. 00:39:27.857 [2024-07-22 10:55:33.302517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.857 [2024-07-22 10:55:33.302528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.857 qpair failed and we were unable to recover it. 00:39:27.857 [2024-07-22 10:55:33.302852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.857 [2024-07-22 10:55:33.302863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.857 qpair failed and we were unable to recover it. 00:39:27.857 [2024-07-22 10:55:33.303206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.857 [2024-07-22 10:55:33.303216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.857 qpair failed and we were unable to recover it. 
00:39:27.857 [2024-07-22 10:55:33.303553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.857 [2024-07-22 10:55:33.303563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.857 qpair failed and we were unable to recover it. 00:39:27.857 [2024-07-22 10:55:33.303885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.857 [2024-07-22 10:55:33.303895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.857 qpair failed and we were unable to recover it. 00:39:27.857 [2024-07-22 10:55:33.304223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.857 [2024-07-22 10:55:33.304233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.857 qpair failed and we were unable to recover it. 00:39:27.857 [2024-07-22 10:55:33.304552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.857 [2024-07-22 10:55:33.304563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.857 qpair failed and we were unable to recover it. 00:39:27.857 [2024-07-22 10:55:33.304908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.857 [2024-07-22 10:55:33.304918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.857 qpair failed and we were unable to recover it. 00:39:27.857 [2024-07-22 10:55:33.305258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.857 [2024-07-22 10:55:33.305269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.857 qpair failed and we were unable to recover it. 00:39:27.857 [2024-07-22 10:55:33.305609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.857 [2024-07-22 10:55:33.305620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.857 qpair failed and we were unable to recover it. 00:39:27.857 [2024-07-22 10:55:33.305863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.857 [2024-07-22 10:55:33.305874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.857 qpair failed and we were unable to recover it. 00:39:27.857 [2024-07-22 10:55:33.306180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.857 [2024-07-22 10:55:33.306189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.857 qpair failed and we were unable to recover it. 00:39:27.857 [2024-07-22 10:55:33.306507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.857 [2024-07-22 10:55:33.306518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.857 qpair failed and we were unable to recover it. 
00:39:27.857 [2024-07-22 10:55:33.306837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.857 [2024-07-22 10:55:33.306848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.857 qpair failed and we were unable to recover it. 00:39:27.857 [2024-07-22 10:55:33.307200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.857 [2024-07-22 10:55:33.307211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.857 qpair failed and we were unable to recover it. 00:39:27.857 [2024-07-22 10:55:33.307556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.857 [2024-07-22 10:55:33.307567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.857 qpair failed and we were unable to recover it. 00:39:27.857 [2024-07-22 10:55:33.307850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.857 [2024-07-22 10:55:33.307861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.857 qpair failed and we were unable to recover it. 00:39:27.857 [2024-07-22 10:55:33.308179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.857 [2024-07-22 10:55:33.308189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.857 qpair failed and we were unable to recover it. 00:39:27.857 [2024-07-22 10:55:33.308517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.857 [2024-07-22 10:55:33.308527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.857 qpair failed and we were unable to recover it. 00:39:27.857 [2024-07-22 10:55:33.308841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.857 [2024-07-22 10:55:33.308852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 00:39:27.858 [2024-07-22 10:55:33.309188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.309199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 00:39:27.858 [2024-07-22 10:55:33.309531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.309542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 00:39:27.858 [2024-07-22 10:55:33.309886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.309896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 
00:39:27.858 [2024-07-22 10:55:33.310233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.310244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 00:39:27.858 [2024-07-22 10:55:33.310567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.310578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 00:39:27.858 [2024-07-22 10:55:33.310897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.310908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 00:39:27.858 [2024-07-22 10:55:33.311248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.311258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 00:39:27.858 [2024-07-22 10:55:33.311604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.311615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 00:39:27.858 [2024-07-22 10:55:33.311969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.311979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 00:39:27.858 [2024-07-22 10:55:33.312373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.312383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 00:39:27.858 [2024-07-22 10:55:33.312700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.312711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 00:39:27.858 [2024-07-22 10:55:33.313055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.313066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 00:39:27.858 [2024-07-22 10:55:33.313383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.313400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 
00:39:27.858 [2024-07-22 10:55:33.313711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.313722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 00:39:27.858 [2024-07-22 10:55:33.313937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.313948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 00:39:27.858 [2024-07-22 10:55:33.314244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.314254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 00:39:27.858 [2024-07-22 10:55:33.314598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.314609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 00:39:27.858 [2024-07-22 10:55:33.314929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.314940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 00:39:27.858 [2024-07-22 10:55:33.315283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.315294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 00:39:27.858 [2024-07-22 10:55:33.315631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.315642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 00:39:27.858 [2024-07-22 10:55:33.316006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.316016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 00:39:27.858 [2024-07-22 10:55:33.316341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.316351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 00:39:27.858 [2024-07-22 10:55:33.316643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.316653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 
00:39:27.858 [2024-07-22 10:55:33.316986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.316997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 00:39:27.858 [2024-07-22 10:55:33.317389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.317406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 00:39:27.858 [2024-07-22 10:55:33.317697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.317708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 00:39:27.858 [2024-07-22 10:55:33.318053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.318063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 00:39:27.858 [2024-07-22 10:55:33.318370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.318381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 00:39:27.858 [2024-07-22 10:55:33.318694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.318705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 00:39:27.858 [2024-07-22 10:55:33.319022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.319033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 00:39:27.858 [2024-07-22 10:55:33.319352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.319364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 00:39:27.858 [2024-07-22 10:55:33.319707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.319719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 00:39:27.858 [2024-07-22 10:55:33.320030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.320041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 
00:39:27.858 [2024-07-22 10:55:33.320354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.320365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 00:39:27.858 [2024-07-22 10:55:33.320555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.320566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 00:39:27.858 [2024-07-22 10:55:33.320881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.320892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 00:39:27.858 [2024-07-22 10:55:33.321204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.321216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 00:39:27.858 [2024-07-22 10:55:33.321530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.321541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 00:39:27.858 [2024-07-22 10:55:33.321885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.321897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 00:39:27.858 [2024-07-22 10:55:33.322251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.322263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 00:39:27.858 [2024-07-22 10:55:33.322583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.322593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 00:39:27.858 [2024-07-22 10:55:33.322932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.322943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 00:39:27.858 [2024-07-22 10:55:33.323137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.323148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 
00:39:27.858 [2024-07-22 10:55:33.323456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.323468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 00:39:27.858 [2024-07-22 10:55:33.323789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.323800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 00:39:27.858 [2024-07-22 10:55:33.324120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.324130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 00:39:27.858 [2024-07-22 10:55:33.324477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.324487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 00:39:27.858 [2024-07-22 10:55:33.324835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.324845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 00:39:27.858 [2024-07-22 10:55:33.325064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.325075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 00:39:27.858 [2024-07-22 10:55:33.325393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.325407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 00:39:27.858 [2024-07-22 10:55:33.325790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.325800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 00:39:27.858 [2024-07-22 10:55:33.326157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.326167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 00:39:27.858 [2024-07-22 10:55:33.326358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.326369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 
00:39:27.858 [2024-07-22 10:55:33.326639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.326650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 00:39:27.858 [2024-07-22 10:55:33.326980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.326991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 00:39:27.858 [2024-07-22 10:55:33.327342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.858 [2024-07-22 10:55:33.327352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.858 qpair failed and we were unable to recover it. 00:39:27.858 [2024-07-22 10:55:33.327646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.327657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 00:39:27.859 [2024-07-22 10:55:33.327975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.327986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 00:39:27.859 [2024-07-22 10:55:33.328331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.328341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 00:39:27.859 [2024-07-22 10:55:33.328645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.328656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 00:39:27.859 [2024-07-22 10:55:33.328976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.328986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 00:39:27.859 [2024-07-22 10:55:33.329312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.329323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 00:39:27.859 [2024-07-22 10:55:33.329660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.329671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 
00:39:27.859 [2024-07-22 10:55:33.329967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.329977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 00:39:27.859 [2024-07-22 10:55:33.330298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.330308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 00:39:27.859 [2024-07-22 10:55:33.330638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.330649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 00:39:27.859 [2024-07-22 10:55:33.330995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.331006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 00:39:27.859 [2024-07-22 10:55:33.331357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.331367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 00:39:27.859 [2024-07-22 10:55:33.331693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.331704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 00:39:27.859 [2024-07-22 10:55:33.332025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.332036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 00:39:27.859 [2024-07-22 10:55:33.332381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.332394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 00:39:27.859 [2024-07-22 10:55:33.332720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.332731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 00:39:27.859 [2024-07-22 10:55:33.333030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.333041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 
00:39:27.859 [2024-07-22 10:55:33.333338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.333348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 00:39:27.859 [2024-07-22 10:55:33.333696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.333707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 00:39:27.859 [2024-07-22 10:55:33.334052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.334063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 00:39:27.859 [2024-07-22 10:55:33.334383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.334393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 00:39:27.859 [2024-07-22 10:55:33.334736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.334747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 00:39:27.859 [2024-07-22 10:55:33.335094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.335105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 00:39:27.859 [2024-07-22 10:55:33.335414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.335426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 00:39:27.859 [2024-07-22 10:55:33.335747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.335757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 00:39:27.859 [2024-07-22 10:55:33.336077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.336088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 00:39:27.859 [2024-07-22 10:55:33.336429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.336440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 
00:39:27.859 [2024-07-22 10:55:33.336747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.336757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 00:39:27.859 [2024-07-22 10:55:33.337080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.337091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 00:39:27.859 [2024-07-22 10:55:33.337411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.337422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 00:39:27.859 [2024-07-22 10:55:33.337762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.337772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 00:39:27.859 [2024-07-22 10:55:33.338115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.338125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 00:39:27.859 [2024-07-22 10:55:33.338450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.338461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 00:39:27.859 [2024-07-22 10:55:33.338782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.338793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 00:39:27.859 [2024-07-22 10:55:33.339138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.339148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 00:39:27.859 [2024-07-22 10:55:33.339465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.339475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 00:39:27.859 [2024-07-22 10:55:33.339798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.339808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 
00:39:27.859 [2024-07-22 10:55:33.340130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.340140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 00:39:27.859 [2024-07-22 10:55:33.340476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.340486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 00:39:27.859 [2024-07-22 10:55:33.340778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.340788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 00:39:27.859 [2024-07-22 10:55:33.341099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.341110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 00:39:27.859 [2024-07-22 10:55:33.341423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.341436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 00:39:27.859 [2024-07-22 10:55:33.341742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.341752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 00:39:27.859 [2024-07-22 10:55:33.342094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.342104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 00:39:27.859 [2024-07-22 10:55:33.342371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.342381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 00:39:27.859 [2024-07-22 10:55:33.342712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.342723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 00:39:27.859 [2024-07-22 10:55:33.343067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.343077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 
00:39:27.859 [2024-07-22 10:55:33.343417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.343428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 00:39:27.859 [2024-07-22 10:55:33.343747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.343757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 00:39:27.859 [2024-07-22 10:55:33.344050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.344060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 00:39:27.859 [2024-07-22 10:55:33.344388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.344402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 00:39:27.859 [2024-07-22 10:55:33.344735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.344746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 00:39:27.859 [2024-07-22 10:55:33.344967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.344979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 00:39:27.859 [2024-07-22 10:55:33.345290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.345300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 00:39:27.859 [2024-07-22 10:55:33.345645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.345656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 00:39:27.859 [2024-07-22 10:55:33.345985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.345996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 00:39:27.859 [2024-07-22 10:55:33.346334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.859 [2024-07-22 10:55:33.346344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.859 qpair failed and we were unable to recover it. 
00:39:27.860 [2024-07-22 10:55:33.346662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.860 [2024-07-22 10:55:33.346673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.860 qpair failed and we were unable to recover it. 00:39:27.860 [2024-07-22 10:55:33.347015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.860 [2024-07-22 10:55:33.347026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.860 qpair failed and we were unable to recover it. 00:39:27.860 [2024-07-22 10:55:33.347365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.860 [2024-07-22 10:55:33.347375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.860 qpair failed and we were unable to recover it. 00:39:27.860 [2024-07-22 10:55:33.347688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.860 [2024-07-22 10:55:33.347699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.860 qpair failed and we were unable to recover it. 00:39:27.860 [2024-07-22 10:55:33.348019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.860 [2024-07-22 10:55:33.348030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.860 qpair failed and we were unable to recover it. 00:39:27.860 [2024-07-22 10:55:33.348371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.860 [2024-07-22 10:55:33.348381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.860 qpair failed and we were unable to recover it. 00:39:27.860 [2024-07-22 10:55:33.348711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.860 [2024-07-22 10:55:33.348722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.860 qpair failed and we were unable to recover it. 00:39:27.860 [2024-07-22 10:55:33.349057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.860 [2024-07-22 10:55:33.349067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.860 qpair failed and we were unable to recover it. 00:39:27.860 [2024-07-22 10:55:33.349429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.860 [2024-07-22 10:55:33.349440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.860 qpair failed and we were unable to recover it. 00:39:27.860 [2024-07-22 10:55:33.349780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.860 [2024-07-22 10:55:33.349790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.860 qpair failed and we were unable to recover it. 
00:39:27.860 [2024-07-22 10:55:33.350134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.860 [2024-07-22 10:55:33.350145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.860 qpair failed and we were unable to recover it. 00:39:27.860 [2024-07-22 10:55:33.350466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.860 [2024-07-22 10:55:33.350476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.860 qpair failed and we were unable to recover it. 00:39:27.860 [2024-07-22 10:55:33.350702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.860 [2024-07-22 10:55:33.350713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.860 qpair failed and we were unable to recover it. 00:39:27.860 [2024-07-22 10:55:33.351047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.860 [2024-07-22 10:55:33.351058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.860 qpair failed and we were unable to recover it. 00:39:27.860 [2024-07-22 10:55:33.351407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.860 [2024-07-22 10:55:33.351418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.860 qpair failed and we were unable to recover it. 00:39:27.860 [2024-07-22 10:55:33.351755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.860 [2024-07-22 10:55:33.351765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.860 qpair failed and we were unable to recover it. 00:39:27.860 [2024-07-22 10:55:33.352085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.860 [2024-07-22 10:55:33.352095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.860 qpair failed and we were unable to recover it. 00:39:27.860 [2024-07-22 10:55:33.352446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.860 [2024-07-22 10:55:33.352457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.860 qpair failed and we were unable to recover it. 00:39:27.860 [2024-07-22 10:55:33.352781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.860 [2024-07-22 10:55:33.352791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.860 qpair failed and we were unable to recover it. 00:39:27.860 [2024-07-22 10:55:33.353113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.860 [2024-07-22 10:55:33.353123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.860 qpair failed and we were unable to recover it. 
00:39:27.860 [2024-07-22 10:55:33.353441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.860 [2024-07-22 10:55:33.353451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.860 qpair failed and we were unable to recover it. 00:39:27.860 [2024-07-22 10:55:33.353802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.860 [2024-07-22 10:55:33.353813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.860 qpair failed and we were unable to recover it. 00:39:27.860 [2024-07-22 10:55:33.354160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.860 [2024-07-22 10:55:33.354170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.860 qpair failed and we were unable to recover it. 00:39:27.860 [2024-07-22 10:55:33.354493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.860 [2024-07-22 10:55:33.354503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.860 qpair failed and we were unable to recover it. 00:39:27.860 [2024-07-22 10:55:33.354822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.860 [2024-07-22 10:55:33.354833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.860 qpair failed and we were unable to recover it. 00:39:27.860 [2024-07-22 10:55:33.355126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.860 [2024-07-22 10:55:33.355137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.860 qpair failed and we were unable to recover it. 00:39:27.860 [2024-07-22 10:55:33.355479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.860 [2024-07-22 10:55:33.355490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.860 qpair failed and we were unable to recover it. 00:39:27.860 [2024-07-22 10:55:33.355808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.860 [2024-07-22 10:55:33.355818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.860 qpair failed and we were unable to recover it. 00:39:27.860 [2024-07-22 10:55:33.356160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.860 [2024-07-22 10:55:33.356171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.860 qpair failed and we were unable to recover it. 00:39:27.860 [2024-07-22 10:55:33.356516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.860 [2024-07-22 10:55:33.356527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.860 qpair failed and we were unable to recover it. 
00:39:27.860 [2024-07-22 10:55:33.356868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.860 [2024-07-22 10:55:33.356878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.860 qpair failed and we were unable to recover it. 00:39:27.860 [2024-07-22 10:55:33.357210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.860 [2024-07-22 10:55:33.357222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.860 qpair failed and we were unable to recover it. 00:39:27.860 [2024-07-22 10:55:33.357541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.860 [2024-07-22 10:55:33.357552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.860 qpair failed and we were unable to recover it. 00:39:27.860 [2024-07-22 10:55:33.357857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.860 [2024-07-22 10:55:33.357867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.860 qpair failed and we were unable to recover it. 00:39:27.860 [2024-07-22 10:55:33.358212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.860 [2024-07-22 10:55:33.358223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.860 qpair failed and we were unable to recover it. 00:39:27.860 [2024-07-22 10:55:33.358533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.860 [2024-07-22 10:55:33.358544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.860 qpair failed and we were unable to recover it. 00:39:27.860 [2024-07-22 10:55:33.358827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.860 [2024-07-22 10:55:33.358837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.860 qpair failed and we were unable to recover it. 00:39:27.860 [2024-07-22 10:55:33.359178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.860 [2024-07-22 10:55:33.359188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.860 qpair failed and we were unable to recover it. 00:39:27.860 [2024-07-22 10:55:33.359538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.860 [2024-07-22 10:55:33.359548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.860 qpair failed and we were unable to recover it. 00:39:27.860 [2024-07-22 10:55:33.359870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.860 [2024-07-22 10:55:33.359881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.860 qpair failed and we were unable to recover it. 
00:39:27.860 [2024-07-22 10:55:33.360235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.860 [2024-07-22 10:55:33.360246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.860 qpair failed and we were unable to recover it. 00:39:27.860 [2024-07-22 10:55:33.360660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.860 [2024-07-22 10:55:33.360671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.860 qpair failed and we were unable to recover it. 00:39:27.860 [2024-07-22 10:55:33.360983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.860 [2024-07-22 10:55:33.360993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.860 qpair failed and we were unable to recover it. 00:39:27.860 [2024-07-22 10:55:33.361317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.860 [2024-07-22 10:55:33.361328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.860 qpair failed and we were unable to recover it. 00:39:27.860 [2024-07-22 10:55:33.361649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.860 [2024-07-22 10:55:33.361660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.860 qpair failed and we were unable to recover it. 00:39:27.860 [2024-07-22 10:55:33.362004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.860 [2024-07-22 10:55:33.362015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.860 qpair failed and we were unable to recover it. 00:39:27.860 [2024-07-22 10:55:33.362210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.860 [2024-07-22 10:55:33.362221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.860 qpair failed and we were unable to recover it. 00:39:27.860 [2024-07-22 10:55:33.362539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.860 [2024-07-22 10:55:33.362550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.860 qpair failed and we were unable to recover it. 00:39:27.860 [2024-07-22 10:55:33.362810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.860 [2024-07-22 10:55:33.362821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.860 qpair failed and we were unable to recover it. 00:39:27.860 [2024-07-22 10:55:33.363163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.363174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 
00:39:27.861 [2024-07-22 10:55:33.363509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.363519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 00:39:27.861 [2024-07-22 10:55:33.363845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.363855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 00:39:27.861 [2024-07-22 10:55:33.364177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.364190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 00:39:27.861 [2024-07-22 10:55:33.364421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.364434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 00:39:27.861 [2024-07-22 10:55:33.364766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.364776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 00:39:27.861 [2024-07-22 10:55:33.365097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.365107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 00:39:27.861 [2024-07-22 10:55:33.365427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.365438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 00:39:27.861 [2024-07-22 10:55:33.365783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.365793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 00:39:27.861 [2024-07-22 10:55:33.366134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.366144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 00:39:27.861 [2024-07-22 10:55:33.366454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.366465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 
00:39:27.861 [2024-07-22 10:55:33.366799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.366809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 00:39:27.861 [2024-07-22 10:55:33.367155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.367165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 00:39:27.861 [2024-07-22 10:55:33.367508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.367520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 00:39:27.861 [2024-07-22 10:55:33.367749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.367760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 00:39:27.861 [2024-07-22 10:55:33.368077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.368088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 00:39:27.861 [2024-07-22 10:55:33.368434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.368445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 00:39:27.861 [2024-07-22 10:55:33.368789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.368800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 00:39:27.861 [2024-07-22 10:55:33.369119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.369129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 00:39:27.861 [2024-07-22 10:55:33.369455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.369466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 00:39:27.861 [2024-07-22 10:55:33.369808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.369819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 
00:39:27.861 [2024-07-22 10:55:33.370129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.370139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 00:39:27.861 [2024-07-22 10:55:33.370469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.370479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 00:39:27.861 [2024-07-22 10:55:33.370800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.370811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 00:39:27.861 [2024-07-22 10:55:33.371152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.371162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 00:39:27.861 [2024-07-22 10:55:33.371504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.371515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 00:39:27.861 [2024-07-22 10:55:33.371836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.371847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 00:39:27.861 [2024-07-22 10:55:33.372167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.372177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 00:39:27.861 [2024-07-22 10:55:33.372524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.372535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 00:39:27.861 [2024-07-22 10:55:33.372880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.372890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 00:39:27.861 [2024-07-22 10:55:33.373225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.373238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 
00:39:27.861 [2024-07-22 10:55:33.373573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.373584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 00:39:27.861 [2024-07-22 10:55:33.373927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.373938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 00:39:27.861 [2024-07-22 10:55:33.374284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.374295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 00:39:27.861 [2024-07-22 10:55:33.374632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.374643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 00:39:27.861 [2024-07-22 10:55:33.374962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.374972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 00:39:27.861 [2024-07-22 10:55:33.375319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.375330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 00:39:27.861 [2024-07-22 10:55:33.375560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.375571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 00:39:27.861 [2024-07-22 10:55:33.375891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.375901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 00:39:27.861 [2024-07-22 10:55:33.376221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.376231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 00:39:27.861 [2024-07-22 10:55:33.376559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.376571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 
00:39:27.861 [2024-07-22 10:55:33.376913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.376923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 00:39:27.861 [2024-07-22 10:55:33.377233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.377244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 00:39:27.861 [2024-07-22 10:55:33.377563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.377574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 00:39:27.861 [2024-07-22 10:55:33.377919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.377930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 00:39:27.861 [2024-07-22 10:55:33.378273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.378283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 00:39:27.861 [2024-07-22 10:55:33.378596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.378607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 00:39:27.861 [2024-07-22 10:55:33.378930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.378940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 00:39:27.861 [2024-07-22 10:55:33.379281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.379291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 00:39:27.861 [2024-07-22 10:55:33.379571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.379582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 00:39:27.861 [2024-07-22 10:55:33.379905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.379915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 
00:39:27.861 [2024-07-22 10:55:33.380234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.380245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 00:39:27.861 [2024-07-22 10:55:33.380557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.380569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 00:39:27.861 [2024-07-22 10:55:33.380888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.380899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 00:39:27.861 [2024-07-22 10:55:33.381217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.381227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 00:39:27.861 [2024-07-22 10:55:33.381524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.381535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 00:39:27.861 [2024-07-22 10:55:33.381865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.861 [2024-07-22 10:55:33.381875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.861 qpair failed and we were unable to recover it. 00:39:27.861 [2024-07-22 10:55:33.382225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.862 [2024-07-22 10:55:33.382236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.862 qpair failed and we were unable to recover it. 00:39:27.862 [2024-07-22 10:55:33.382564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.862 [2024-07-22 10:55:33.382575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.862 qpair failed and we were unable to recover it. 00:39:27.862 [2024-07-22 10:55:33.382911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.862 [2024-07-22 10:55:33.382921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.862 qpair failed and we were unable to recover it. 00:39:27.862 [2024-07-22 10:55:33.383261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.862 [2024-07-22 10:55:33.383273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.862 qpair failed and we were unable to recover it. 
00:39:27.862 [2024-07-22 10:55:33.383605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.862 [2024-07-22 10:55:33.383615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.862 qpair failed and we were unable to recover it. 00:39:27.862 [2024-07-22 10:55:33.383951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.862 [2024-07-22 10:55:33.383961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.862 qpair failed and we were unable to recover it. 00:39:27.862 [2024-07-22 10:55:33.384278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.862 [2024-07-22 10:55:33.384290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.862 qpair failed and we were unable to recover it. 00:39:27.862 [2024-07-22 10:55:33.384537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.862 [2024-07-22 10:55:33.384548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.862 qpair failed and we were unable to recover it. 00:39:27.862 [2024-07-22 10:55:33.384897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.862 [2024-07-22 10:55:33.384908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.862 qpair failed and we were unable to recover it. 00:39:27.862 [2024-07-22 10:55:33.385224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.862 [2024-07-22 10:55:33.385235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.862 qpair failed and we were unable to recover it. 00:39:27.862 [2024-07-22 10:55:33.385505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.862 [2024-07-22 10:55:33.385516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.862 qpair failed and we were unable to recover it. 00:39:27.862 [2024-07-22 10:55:33.385644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.862 [2024-07-22 10:55:33.385653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.862 qpair failed and we were unable to recover it. 00:39:27.862 [2024-07-22 10:55:33.386003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.862 [2024-07-22 10:55:33.386013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.862 qpair failed and we were unable to recover it. 00:39:27.862 [2024-07-22 10:55:33.386334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.862 [2024-07-22 10:55:33.386344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.862 qpair failed and we were unable to recover it. 
00:39:27.862 [2024-07-22 10:55:33.386661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.862 [2024-07-22 10:55:33.386672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.862 qpair failed and we were unable to recover it. 00:39:27.862 [2024-07-22 10:55:33.386886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.862 [2024-07-22 10:55:33.386896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.862 qpair failed and we were unable to recover it. 00:39:27.862 [2024-07-22 10:55:33.387186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.862 [2024-07-22 10:55:33.387196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.862 qpair failed and we were unable to recover it. 00:39:27.862 [2024-07-22 10:55:33.387516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.862 [2024-07-22 10:55:33.387527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.862 qpair failed and we were unable to recover it. 00:39:27.862 [2024-07-22 10:55:33.387846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.862 [2024-07-22 10:55:33.387857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.862 qpair failed and we were unable to recover it. 00:39:27.862 [2024-07-22 10:55:33.388045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.862 [2024-07-22 10:55:33.388057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.862 qpair failed and we were unable to recover it. 00:39:27.862 [2024-07-22 10:55:33.388338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.862 [2024-07-22 10:55:33.388349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.862 qpair failed and we were unable to recover it. 00:39:27.862 [2024-07-22 10:55:33.388664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.862 [2024-07-22 10:55:33.388675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.862 qpair failed and we were unable to recover it. 00:39:27.862 [2024-07-22 10:55:33.388868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.862 [2024-07-22 10:55:33.388879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.862 qpair failed and we were unable to recover it. 00:39:27.862 [2024-07-22 10:55:33.389224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.862 [2024-07-22 10:55:33.389235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.862 qpair failed and we were unable to recover it. 
00:39:27.862 [2024-07-22 10:55:33.389581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.862 [2024-07-22 10:55:33.389592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:27.862 qpair failed and we were unable to recover it.
00:39:27.862 [... the same three-line error sequence from posix.c:1038:posix_sock_create and nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock, always for tqpair=0x1c098a0 with addr=10.0.0.2, port=4420, repeats roughly 200 more times between 10:55:33.389 and 10:55:33.459 ...]
00:39:27.866 [2024-07-22 10:55:33.458954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.866 [2024-07-22 10:55:33.458965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:27.866 qpair failed and we were unable to recover it.
00:39:27.866 [2024-07-22 10:55:33.459296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.866 [2024-07-22 10:55:33.459307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.866 qpair failed and we were unable to recover it. 00:39:27.866 [2024-07-22 10:55:33.459639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.866 [2024-07-22 10:55:33.459650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.866 qpair failed and we were unable to recover it. 00:39:27.866 [2024-07-22 10:55:33.459995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.866 [2024-07-22 10:55:33.460005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.866 qpair failed and we were unable to recover it. 00:39:27.866 [2024-07-22 10:55:33.460352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.866 [2024-07-22 10:55:33.460362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.866 qpair failed and we were unable to recover it. 00:39:27.866 [2024-07-22 10:55:33.460687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.866 [2024-07-22 10:55:33.460699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.866 qpair failed and we were unable to recover it. 00:39:27.866 [2024-07-22 10:55:33.461017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.866 [2024-07-22 10:55:33.461028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.866 qpair failed and we were unable to recover it. 00:39:27.866 [2024-07-22 10:55:33.461372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.866 [2024-07-22 10:55:33.461383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.866 qpair failed and we were unable to recover it. 00:39:27.866 [2024-07-22 10:55:33.461746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.866 [2024-07-22 10:55:33.461756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.866 qpair failed and we were unable to recover it. 00:39:27.866 [2024-07-22 10:55:33.462057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.866 [2024-07-22 10:55:33.462068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.866 qpair failed and we were unable to recover it. 00:39:27.866 [2024-07-22 10:55:33.462391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.866 [2024-07-22 10:55:33.462406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.866 qpair failed and we were unable to recover it. 
00:39:27.866 [2024-07-22 10:55:33.462726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.866 [2024-07-22 10:55:33.462736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.866 qpair failed and we were unable to recover it. 00:39:27.866 [2024-07-22 10:55:33.463025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.866 [2024-07-22 10:55:33.463036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.866 qpair failed and we were unable to recover it. 00:39:27.866 [2024-07-22 10:55:33.463229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.866 [2024-07-22 10:55:33.463241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.866 qpair failed and we were unable to recover it. 00:39:27.866 [2024-07-22 10:55:33.463580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.866 [2024-07-22 10:55:33.463591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.866 qpair failed and we were unable to recover it. 00:39:27.866 [2024-07-22 10:55:33.463876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.866 [2024-07-22 10:55:33.463887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.866 qpair failed and we were unable to recover it. 00:39:27.866 [2024-07-22 10:55:33.464228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.866 [2024-07-22 10:55:33.464240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.866 qpair failed and we were unable to recover it. 00:39:27.866 [2024-07-22 10:55:33.464568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.866 [2024-07-22 10:55:33.464579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.866 qpair failed and we were unable to recover it. 00:39:27.866 [2024-07-22 10:55:33.464896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.866 [2024-07-22 10:55:33.464906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.866 qpair failed and we were unable to recover it. 00:39:27.866 [2024-07-22 10:55:33.465253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.866 [2024-07-22 10:55:33.465264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.866 qpair failed and we were unable to recover it. 00:39:27.866 [2024-07-22 10:55:33.465602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.866 [2024-07-22 10:55:33.465614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.866 qpair failed and we were unable to recover it. 
00:39:27.866 [2024-07-22 10:55:33.465935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.866 [2024-07-22 10:55:33.465946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.866 qpair failed and we were unable to recover it. 00:39:27.866 [2024-07-22 10:55:33.466330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.866 [2024-07-22 10:55:33.466340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.866 qpair failed and we were unable to recover it. 00:39:27.866 [2024-07-22 10:55:33.466648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.866 [2024-07-22 10:55:33.466659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.866 qpair failed and we were unable to recover it. 00:39:27.866 [2024-07-22 10:55:33.466967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.866 [2024-07-22 10:55:33.466977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.866 qpair failed and we were unable to recover it. 00:39:27.866 [2024-07-22 10:55:33.467296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.866 [2024-07-22 10:55:33.467309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.866 qpair failed and we were unable to recover it. 00:39:27.866 [2024-07-22 10:55:33.467650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.866 [2024-07-22 10:55:33.467662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.866 qpair failed and we were unable to recover it. 00:39:27.866 [2024-07-22 10:55:33.467967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.866 [2024-07-22 10:55:33.467979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.866 qpair failed and we were unable to recover it. 00:39:27.866 [2024-07-22 10:55:33.468323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.866 [2024-07-22 10:55:33.468334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.866 qpair failed and we were unable to recover it. 00:39:27.866 [2024-07-22 10:55:33.468662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.866 [2024-07-22 10:55:33.468673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.866 qpair failed and we were unable to recover it. 00:39:27.866 [2024-07-22 10:55:33.468995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.866 [2024-07-22 10:55:33.469005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.866 qpair failed and we were unable to recover it. 
00:39:27.866 [2024-07-22 10:55:33.469358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.866 [2024-07-22 10:55:33.469368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.866 qpair failed and we were unable to recover it. 00:39:27.866 [2024-07-22 10:55:33.469688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.866 [2024-07-22 10:55:33.469698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.866 qpair failed and we were unable to recover it. 00:39:27.866 [2024-07-22 10:55:33.470044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.866 [2024-07-22 10:55:33.470055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.866 qpair failed and we were unable to recover it. 00:39:27.866 [2024-07-22 10:55:33.470242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.866 [2024-07-22 10:55:33.470254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.866 qpair failed and we were unable to recover it. 00:39:27.866 [2024-07-22 10:55:33.470566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.866 [2024-07-22 10:55:33.470578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.866 qpair failed and we were unable to recover it. 00:39:27.866 [2024-07-22 10:55:33.470807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.866 [2024-07-22 10:55:33.470818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.866 qpair failed and we were unable to recover it. 00:39:27.866 [2024-07-22 10:55:33.471157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.866 [2024-07-22 10:55:33.471169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.866 qpair failed and we were unable to recover it. 00:39:27.866 [2024-07-22 10:55:33.471519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.866 [2024-07-22 10:55:33.471530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.866 qpair failed and we were unable to recover it. 00:39:27.866 [2024-07-22 10:55:33.471878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.866 [2024-07-22 10:55:33.471889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.866 qpair failed and we were unable to recover it. 00:39:27.866 [2024-07-22 10:55:33.472237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.866 [2024-07-22 10:55:33.472247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.866 qpair failed and we were unable to recover it. 
00:39:27.866 [2024-07-22 10:55:33.472568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.866 [2024-07-22 10:55:33.472579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.866 qpair failed and we were unable to recover it. 00:39:27.866 [2024-07-22 10:55:33.472891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.866 [2024-07-22 10:55:33.472902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.866 qpair failed and we were unable to recover it. 00:39:27.867 [2024-07-22 10:55:33.473245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.867 [2024-07-22 10:55:33.473255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.867 qpair failed and we were unable to recover it. 00:39:27.867 [2024-07-22 10:55:33.473589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.867 [2024-07-22 10:55:33.473600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.867 qpair failed and we were unable to recover it. 00:39:27.867 [2024-07-22 10:55:33.473920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.867 [2024-07-22 10:55:33.473931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.867 qpair failed and we were unable to recover it. 00:39:27.867 [2024-07-22 10:55:33.474243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.867 [2024-07-22 10:55:33.474253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.867 qpair failed and we were unable to recover it. 00:39:27.867 [2024-07-22 10:55:33.474588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.867 [2024-07-22 10:55:33.474599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.867 qpair failed and we were unable to recover it. 00:39:27.867 [2024-07-22 10:55:33.474939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.867 [2024-07-22 10:55:33.474949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.867 qpair failed and we were unable to recover it. 00:39:27.867 [2024-07-22 10:55:33.475283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.867 [2024-07-22 10:55:33.475294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.867 qpair failed and we were unable to recover it. 00:39:27.867 [2024-07-22 10:55:33.475636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.867 [2024-07-22 10:55:33.475647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.867 qpair failed and we were unable to recover it. 
00:39:27.867 [2024-07-22 10:55:33.476034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.867 [2024-07-22 10:55:33.476045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.867 qpair failed and we were unable to recover it. 00:39:27.867 [2024-07-22 10:55:33.476355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.867 [2024-07-22 10:55:33.476367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.867 qpair failed and we were unable to recover it. 00:39:27.867 [2024-07-22 10:55:33.476699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.867 [2024-07-22 10:55:33.476710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.867 qpair failed and we were unable to recover it. 00:39:27.867 [2024-07-22 10:55:33.477052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.867 [2024-07-22 10:55:33.477062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.867 qpair failed and we were unable to recover it. 00:39:27.867 [2024-07-22 10:55:33.477414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.867 [2024-07-22 10:55:33.477425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.867 qpair failed and we were unable to recover it. 00:39:27.867 [2024-07-22 10:55:33.477742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.867 [2024-07-22 10:55:33.477753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.867 qpair failed and we were unable to recover it. 00:39:27.867 [2024-07-22 10:55:33.478074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.867 [2024-07-22 10:55:33.478085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.867 qpair failed and we were unable to recover it. 00:39:27.867 [2024-07-22 10:55:33.478405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.867 [2024-07-22 10:55:33.478416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.867 qpair failed and we were unable to recover it. 00:39:27.867 [2024-07-22 10:55:33.478761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.867 [2024-07-22 10:55:33.478771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.867 qpair failed and we were unable to recover it. 00:39:27.867 [2024-07-22 10:55:33.479120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.867 [2024-07-22 10:55:33.479131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.867 qpair failed and we were unable to recover it. 
00:39:27.867 [2024-07-22 10:55:33.479451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.867 [2024-07-22 10:55:33.479462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.867 qpair failed and we were unable to recover it. 00:39:27.867 [2024-07-22 10:55:33.479775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.867 [2024-07-22 10:55:33.479785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.867 qpair failed and we were unable to recover it. 00:39:27.867 [2024-07-22 10:55:33.480097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.867 [2024-07-22 10:55:33.480107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.867 qpair failed and we were unable to recover it. 00:39:27.867 [2024-07-22 10:55:33.480278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.867 [2024-07-22 10:55:33.480291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.867 qpair failed and we were unable to recover it. 00:39:27.867 [2024-07-22 10:55:33.480636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.867 [2024-07-22 10:55:33.480647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.867 qpair failed and we were unable to recover it. 00:39:27.867 [2024-07-22 10:55:33.481043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.867 [2024-07-22 10:55:33.481054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.867 qpair failed and we were unable to recover it. 00:39:27.867 [2024-07-22 10:55:33.481406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.867 [2024-07-22 10:55:33.481417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.867 qpair failed and we were unable to recover it. 00:39:27.867 [2024-07-22 10:55:33.481555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.867 [2024-07-22 10:55:33.481566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.867 qpair failed and we were unable to recover it. 00:39:27.867 [2024-07-22 10:55:33.481851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.867 [2024-07-22 10:55:33.481862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.867 qpair failed and we were unable to recover it. 00:39:27.867 [2024-07-22 10:55:33.482181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.867 [2024-07-22 10:55:33.482191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.867 qpair failed and we were unable to recover it. 
00:39:27.867 [2024-07-22 10:55:33.482530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.867 [2024-07-22 10:55:33.482540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.867 qpair failed and we were unable to recover it. 00:39:27.867 [2024-07-22 10:55:33.482839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.867 [2024-07-22 10:55:33.482850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.867 qpair failed and we were unable to recover it. 00:39:27.867 [2024-07-22 10:55:33.483190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.867 [2024-07-22 10:55:33.483200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.867 qpair failed and we were unable to recover it. 00:39:27.867 [2024-07-22 10:55:33.483532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.867 [2024-07-22 10:55:33.483544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.867 qpair failed and we were unable to recover it. 00:39:27.867 [2024-07-22 10:55:33.483845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.867 [2024-07-22 10:55:33.483856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.867 qpair failed and we were unable to recover it. 00:39:27.867 [2024-07-22 10:55:33.484203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.867 [2024-07-22 10:55:33.484214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.867 qpair failed and we were unable to recover it. 00:39:27.867 [2024-07-22 10:55:33.484535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.867 [2024-07-22 10:55:33.484547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.867 qpair failed and we were unable to recover it. 00:39:27.867 [2024-07-22 10:55:33.484879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.867 [2024-07-22 10:55:33.484890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.867 qpair failed and we were unable to recover it. 00:39:27.867 [2024-07-22 10:55:33.485235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.867 [2024-07-22 10:55:33.485247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.867 qpair failed and we were unable to recover it. 00:39:27.867 [2024-07-22 10:55:33.485578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.867 [2024-07-22 10:55:33.485589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.867 qpair failed and we were unable to recover it. 
00:39:27.867 [2024-07-22 10:55:33.485906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.867 [2024-07-22 10:55:33.485916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.867 qpair failed and we were unable to recover it. 00:39:27.867 [2024-07-22 10:55:33.486230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.867 [2024-07-22 10:55:33.486240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.867 qpair failed and we were unable to recover it. 00:39:27.867 [2024-07-22 10:55:33.486577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.867 [2024-07-22 10:55:33.486588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.867 qpair failed and we were unable to recover it. 00:39:27.867 [2024-07-22 10:55:33.486936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.867 [2024-07-22 10:55:33.486947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.867 qpair failed and we were unable to recover it. 00:39:27.867 [2024-07-22 10:55:33.487267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.867 [2024-07-22 10:55:33.487278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.867 qpair failed and we were unable to recover it. 00:39:27.867 [2024-07-22 10:55:33.487540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.867 [2024-07-22 10:55:33.487551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.867 qpair failed and we were unable to recover it. 00:39:27.867 [2024-07-22 10:55:33.487844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.867 [2024-07-22 10:55:33.487855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.867 qpair failed and we were unable to recover it. 00:39:27.867 [2024-07-22 10:55:33.488203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.867 [2024-07-22 10:55:33.488214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.867 qpair failed and we were unable to recover it. 00:39:27.867 [2024-07-22 10:55:33.488533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.867 [2024-07-22 10:55:33.488543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.867 qpair failed and we were unable to recover it. 00:39:27.867 [2024-07-22 10:55:33.488860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.867 [2024-07-22 10:55:33.488870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.867 qpair failed and we were unable to recover it. 
00:39:27.867 [2024-07-22 10:55:33.489216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.867 [2024-07-22 10:55:33.489226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.867 qpair failed and we were unable to recover it. 00:39:27.867 [2024-07-22 10:55:33.489418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.868 [2024-07-22 10:55:33.489430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.868 qpair failed and we were unable to recover it. 00:39:27.868 [2024-07-22 10:55:33.489752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.868 [2024-07-22 10:55:33.489763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.868 qpair failed and we were unable to recover it. 00:39:27.868 [2024-07-22 10:55:33.490087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.868 [2024-07-22 10:55:33.490098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.868 qpair failed and we were unable to recover it. 00:39:27.868 [2024-07-22 10:55:33.490441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.868 [2024-07-22 10:55:33.490452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.868 qpair failed and we were unable to recover it. 00:39:27.868 [2024-07-22 10:55:33.490769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.868 [2024-07-22 10:55:33.490780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.868 qpair failed and we were unable to recover it. 00:39:27.868 [2024-07-22 10:55:33.491009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.868 [2024-07-22 10:55:33.491020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.868 qpair failed and we were unable to recover it. 00:39:27.868 [2024-07-22 10:55:33.491339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.868 [2024-07-22 10:55:33.491349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.868 qpair failed and we were unable to recover it. 00:39:27.868 [2024-07-22 10:55:33.491662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.868 [2024-07-22 10:55:33.491673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.868 qpair failed and we were unable to recover it. 00:39:27.868 [2024-07-22 10:55:33.492026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.868 [2024-07-22 10:55:33.492036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.868 qpair failed and we were unable to recover it. 
00:39:27.868 [2024-07-22 10:55:33.492353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.868 [2024-07-22 10:55:33.492364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.868 qpair failed and we were unable to recover it. 00:39:27.868 [2024-07-22 10:55:33.492704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.868 [2024-07-22 10:55:33.492717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.868 qpair failed and we were unable to recover it. 00:39:27.868 [2024-07-22 10:55:33.492944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.868 [2024-07-22 10:55:33.492955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.868 qpair failed and we were unable to recover it. 00:39:27.868 [2024-07-22 10:55:33.493308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.868 [2024-07-22 10:55:33.493318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.868 qpair failed and we were unable to recover it. 00:39:27.868 [2024-07-22 10:55:33.493632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.868 [2024-07-22 10:55:33.493643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.868 qpair failed and we were unable to recover it. 00:39:27.868 [2024-07-22 10:55:33.493957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.868 [2024-07-22 10:55:33.493967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.868 qpair failed and we were unable to recover it. 00:39:27.868 [2024-07-22 10:55:33.494277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.868 [2024-07-22 10:55:33.494288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.868 qpair failed and we were unable to recover it. 00:39:27.868 [2024-07-22 10:55:33.494667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.868 [2024-07-22 10:55:33.494678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.868 qpair failed and we were unable to recover it. 00:39:27.868 [2024-07-22 10:55:33.494976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.868 [2024-07-22 10:55:33.494987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.868 qpair failed and we were unable to recover it. 00:39:27.868 [2024-07-22 10:55:33.495331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.868 [2024-07-22 10:55:33.495342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.868 qpair failed and we were unable to recover it. 
00:39:27.868 [2024-07-22 10:55:33.495653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.868 [2024-07-22 10:55:33.495664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.868 qpair failed and we were unable to recover it. 00:39:27.868 [2024-07-22 10:55:33.496014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.868 [2024-07-22 10:55:33.496026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.868 qpair failed and we were unable to recover it. 00:39:27.868 [2024-07-22 10:55:33.496367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.868 [2024-07-22 10:55:33.496378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.868 qpair failed and we were unable to recover it. 00:39:27.868 [2024-07-22 10:55:33.496612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.868 [2024-07-22 10:55:33.496624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.868 qpair failed and we were unable to recover it. 00:39:27.868 [2024-07-22 10:55:33.496948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.868 [2024-07-22 10:55:33.496959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.868 qpair failed and we were unable to recover it. 00:39:27.868 [2024-07-22 10:55:33.497303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.868 [2024-07-22 10:55:33.497314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.868 qpair failed and we were unable to recover it. 00:39:27.868 [2024-07-22 10:55:33.497734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.868 [2024-07-22 10:55:33.497748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.868 qpair failed and we were unable to recover it. 00:39:27.868 [2024-07-22 10:55:33.498088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.868 [2024-07-22 10:55:33.498099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.868 qpair failed and we were unable to recover it. 00:39:27.868 [2024-07-22 10:55:33.498448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.868 [2024-07-22 10:55:33.498459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.868 qpair failed and we were unable to recover it. 00:39:27.868 [2024-07-22 10:55:33.498817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.868 [2024-07-22 10:55:33.498829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.868 qpair failed and we were unable to recover it. 
00:39:27.868 [2024-07-22 10:55:33.499174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.868 [2024-07-22 10:55:33.499188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.868 qpair failed and we were unable to recover it. 00:39:27.868 [2024-07-22 10:55:33.499495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.868 [2024-07-22 10:55:33.499507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.868 qpair failed and we were unable to recover it. 00:39:27.868 [2024-07-22 10:55:33.499876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.868 [2024-07-22 10:55:33.499887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.868 qpair failed and we were unable to recover it. 00:39:27.868 [2024-07-22 10:55:33.500202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.868 [2024-07-22 10:55:33.500213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.868 qpair failed and we were unable to recover it. 00:39:27.868 [2024-07-22 10:55:33.500526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.868 [2024-07-22 10:55:33.500536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.868 qpair failed and we were unable to recover it. 00:39:27.868 [2024-07-22 10:55:33.500854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.868 [2024-07-22 10:55:33.500865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.868 qpair failed and we were unable to recover it. 00:39:27.868 [2024-07-22 10:55:33.501207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.868 [2024-07-22 10:55:33.501218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.868 qpair failed and we were unable to recover it. 00:39:27.868 [2024-07-22 10:55:33.501533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.868 [2024-07-22 10:55:33.501544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.868 qpair failed and we were unable to recover it. 00:39:27.868 [2024-07-22 10:55:33.501882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.868 [2024-07-22 10:55:33.501893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.868 qpair failed and we were unable to recover it. 00:39:27.868 [2024-07-22 10:55:33.502218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.868 [2024-07-22 10:55:33.502229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:27.868 qpair failed and we were unable to recover it. 
00:39:27.868 [2024-07-22 10:55:33.502505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.868 [2024-07-22 10:55:33.502516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:27.868 qpair failed and we were unable to recover it.
[The same three-line error record repeats without interruption from [2024-07-22 10:55:33.502834] through [2024-07-22 10:55:33.572039] (console time 00:39:27.868 - 00:39:28.142): posix.c:1038:posix_sock_create reports connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock reports a sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420; and each attempt ends with "qpair failed and we were unable to recover it." Only the microsecond timestamps differ between repetitions.]
00:39:28.142 [2024-07-22 10:55:33.572382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.142 [2024-07-22 10:55:33.572393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.142 qpair failed and we were unable to recover it. 00:39:28.143 [2024-07-22 10:55:33.572801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.143 [2024-07-22 10:55:33.572812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.143 qpair failed and we were unable to recover it. 00:39:28.143 [2024-07-22 10:55:33.573102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.143 [2024-07-22 10:55:33.573114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.143 qpair failed and we were unable to recover it. 00:39:28.143 [2024-07-22 10:55:33.573408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.143 [2024-07-22 10:55:33.573420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.143 qpair failed and we were unable to recover it. 00:39:28.143 [2024-07-22 10:55:33.573803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.143 [2024-07-22 10:55:33.573813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.143 qpair failed and we were unable to recover it. 00:39:28.143 [2024-07-22 10:55:33.574035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.143 [2024-07-22 10:55:33.574045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.143 qpair failed and we were unable to recover it. 00:39:28.143 [2024-07-22 10:55:33.574363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.143 [2024-07-22 10:55:33.574373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.143 qpair failed and we were unable to recover it. 00:39:28.143 [2024-07-22 10:55:33.574685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.143 [2024-07-22 10:55:33.574696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.143 qpair failed and we were unable to recover it. 00:39:28.143 [2024-07-22 10:55:33.575049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.143 [2024-07-22 10:55:33.575059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.143 qpair failed and we were unable to recover it. 00:39:28.143 [2024-07-22 10:55:33.575414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.143 [2024-07-22 10:55:33.575424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.143 qpair failed and we were unable to recover it. 
00:39:28.143 [2024-07-22 10:55:33.575722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.143 [2024-07-22 10:55:33.575733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.143 qpair failed and we were unable to recover it. 00:39:28.143 [2024-07-22 10:55:33.576087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.143 [2024-07-22 10:55:33.576097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.143 qpair failed and we were unable to recover it. 00:39:28.143 [2024-07-22 10:55:33.576393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.143 [2024-07-22 10:55:33.576408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.143 qpair failed and we were unable to recover it. 00:39:28.143 [2024-07-22 10:55:33.576713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.143 [2024-07-22 10:55:33.576725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.143 qpair failed and we were unable to recover it. 00:39:28.143 [2024-07-22 10:55:33.577063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.143 [2024-07-22 10:55:33.577075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.143 qpair failed and we were unable to recover it. 00:39:28.143 [2024-07-22 10:55:33.577373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.143 [2024-07-22 10:55:33.577383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.143 qpair failed and we were unable to recover it. 00:39:28.143 [2024-07-22 10:55:33.577708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.143 [2024-07-22 10:55:33.577718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.143 qpair failed and we were unable to recover it. 00:39:28.143 [2024-07-22 10:55:33.578034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.143 [2024-07-22 10:55:33.578045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.143 qpair failed and we were unable to recover it. 00:39:28.143 [2024-07-22 10:55:33.578253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.143 [2024-07-22 10:55:33.578263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.143 qpair failed and we were unable to recover it. 00:39:28.143 [2024-07-22 10:55:33.578644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.143 [2024-07-22 10:55:33.578655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.143 qpair failed and we were unable to recover it. 
00:39:28.143 [2024-07-22 10:55:33.578957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.143 [2024-07-22 10:55:33.578967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.143 qpair failed and we were unable to recover it. 00:39:28.143 [2024-07-22 10:55:33.579320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.143 [2024-07-22 10:55:33.579331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.143 qpair failed and we were unable to recover it. 00:39:28.143 [2024-07-22 10:55:33.579647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.143 [2024-07-22 10:55:33.579657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.143 qpair failed and we were unable to recover it. 00:39:28.143 [2024-07-22 10:55:33.579972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.143 [2024-07-22 10:55:33.579985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.143 qpair failed and we were unable to recover it. 00:39:28.143 [2024-07-22 10:55:33.580307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.143 [2024-07-22 10:55:33.580318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.143 qpair failed and we were unable to recover it. 00:39:28.143 [2024-07-22 10:55:33.580619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.143 [2024-07-22 10:55:33.580630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.143 qpair failed and we were unable to recover it. 00:39:28.143 [2024-07-22 10:55:33.580946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.143 [2024-07-22 10:55:33.580957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.143 qpair failed and we were unable to recover it. 00:39:28.143 [2024-07-22 10:55:33.581272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.143 [2024-07-22 10:55:33.581283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.143 qpair failed and we were unable to recover it. 00:39:28.143 [2024-07-22 10:55:33.581570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.143 [2024-07-22 10:55:33.581582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.143 qpair failed and we were unable to recover it. 00:39:28.143 [2024-07-22 10:55:33.581930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.143 [2024-07-22 10:55:33.581941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.143 qpair failed and we were unable to recover it. 
00:39:28.143 [2024-07-22 10:55:33.582262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.143 [2024-07-22 10:55:33.582273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.143 qpair failed and we were unable to recover it. 00:39:28.143 [2024-07-22 10:55:33.582624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.143 [2024-07-22 10:55:33.582635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.143 qpair failed and we were unable to recover it. 00:39:28.143 [2024-07-22 10:55:33.582951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.143 [2024-07-22 10:55:33.582962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.143 qpair failed and we were unable to recover it. 00:39:28.143 [2024-07-22 10:55:33.583304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.143 [2024-07-22 10:55:33.583315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.143 qpair failed and we were unable to recover it. 00:39:28.143 [2024-07-22 10:55:33.583551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.143 [2024-07-22 10:55:33.583561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.143 qpair failed and we were unable to recover it. 00:39:28.143 [2024-07-22 10:55:33.583905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.143 [2024-07-22 10:55:33.583916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.143 qpair failed and we were unable to recover it. 00:39:28.143 [2024-07-22 10:55:33.584243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.143 [2024-07-22 10:55:33.584255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.143 qpair failed and we were unable to recover it. 00:39:28.143 [2024-07-22 10:55:33.584593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.143 [2024-07-22 10:55:33.584604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.143 qpair failed and we were unable to recover it. 00:39:28.143 [2024-07-22 10:55:33.584923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.143 [2024-07-22 10:55:33.584934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.143 qpair failed and we were unable to recover it. 00:39:28.143 [2024-07-22 10:55:33.585245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.143 [2024-07-22 10:55:33.585256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.143 qpair failed and we were unable to recover it. 
00:39:28.143 [2024-07-22 10:55:33.585447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.143 [2024-07-22 10:55:33.585458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.143 qpair failed and we were unable to recover it. 00:39:28.143 [2024-07-22 10:55:33.585750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.143 [2024-07-22 10:55:33.585761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.143 qpair failed and we were unable to recover it. 00:39:28.144 [2024-07-22 10:55:33.586073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.144 [2024-07-22 10:55:33.586083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.144 qpair failed and we were unable to recover it. 00:39:28.144 [2024-07-22 10:55:33.586407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.144 [2024-07-22 10:55:33.586418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.144 qpair failed and we were unable to recover it. 00:39:28.144 [2024-07-22 10:55:33.586728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.144 [2024-07-22 10:55:33.586738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.144 qpair failed and we were unable to recover it. 00:39:28.144 [2024-07-22 10:55:33.587089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.144 [2024-07-22 10:55:33.587099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.144 qpair failed and we were unable to recover it. 00:39:28.144 [2024-07-22 10:55:33.587419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.144 [2024-07-22 10:55:33.587431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.144 qpair failed and we were unable to recover it. 00:39:28.144 [2024-07-22 10:55:33.587762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.144 [2024-07-22 10:55:33.587772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.144 qpair failed and we were unable to recover it. 00:39:28.144 [2024-07-22 10:55:33.588112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.144 [2024-07-22 10:55:33.588123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.144 qpair failed and we were unable to recover it. 00:39:28.144 [2024-07-22 10:55:33.588473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.144 [2024-07-22 10:55:33.588484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.144 qpair failed and we were unable to recover it. 
00:39:28.144 [2024-07-22 10:55:33.588829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.144 [2024-07-22 10:55:33.588842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.144 qpair failed and we were unable to recover it. 00:39:28.144 [2024-07-22 10:55:33.589160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.144 [2024-07-22 10:55:33.589171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.144 qpair failed and we were unable to recover it. 00:39:28.144 [2024-07-22 10:55:33.589492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.144 [2024-07-22 10:55:33.589503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.144 qpair failed and we were unable to recover it. 00:39:28.144 [2024-07-22 10:55:33.589844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.144 [2024-07-22 10:55:33.589855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.144 qpair failed and we were unable to recover it. 00:39:28.144 [2024-07-22 10:55:33.590202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.144 [2024-07-22 10:55:33.590213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.144 qpair failed and we were unable to recover it. 00:39:28.144 [2024-07-22 10:55:33.590534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.144 [2024-07-22 10:55:33.590546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.144 qpair failed and we were unable to recover it. 00:39:28.144 [2024-07-22 10:55:33.590862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.144 [2024-07-22 10:55:33.590873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.144 qpair failed and we were unable to recover it. 00:39:28.144 [2024-07-22 10:55:33.591213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.144 [2024-07-22 10:55:33.591224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.144 qpair failed and we were unable to recover it. 00:39:28.144 [2024-07-22 10:55:33.591565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.144 [2024-07-22 10:55:33.591577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.144 qpair failed and we were unable to recover it. 00:39:28.144 [2024-07-22 10:55:33.591910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.144 [2024-07-22 10:55:33.591921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.144 qpair failed and we were unable to recover it. 
00:39:28.144 [2024-07-22 10:55:33.592261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.144 [2024-07-22 10:55:33.592271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.144 qpair failed and we were unable to recover it. 00:39:28.144 [2024-07-22 10:55:33.592631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.144 [2024-07-22 10:55:33.592642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.144 qpair failed and we were unable to recover it. 00:39:28.144 [2024-07-22 10:55:33.592962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.144 [2024-07-22 10:55:33.592973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.144 qpair failed and we were unable to recover it. 00:39:28.144 [2024-07-22 10:55:33.593186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.144 [2024-07-22 10:55:33.593196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.144 qpair failed and we were unable to recover it. 00:39:28.144 [2024-07-22 10:55:33.593527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.144 [2024-07-22 10:55:33.593538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.144 qpair failed and we were unable to recover it. 00:39:28.144 [2024-07-22 10:55:33.593883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.144 [2024-07-22 10:55:33.593894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.144 qpair failed and we were unable to recover it. 00:39:28.144 [2024-07-22 10:55:33.594213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.144 [2024-07-22 10:55:33.594224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.144 qpair failed and we were unable to recover it. 00:39:28.144 [2024-07-22 10:55:33.594419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.144 [2024-07-22 10:55:33.594430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.144 qpair failed and we were unable to recover it. 00:39:28.144 [2024-07-22 10:55:33.594767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.144 [2024-07-22 10:55:33.594778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.144 qpair failed and we were unable to recover it. 00:39:28.144 [2024-07-22 10:55:33.595122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.144 [2024-07-22 10:55:33.595133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.144 qpair failed and we were unable to recover it. 
00:39:28.144 [2024-07-22 10:55:33.595459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.144 [2024-07-22 10:55:33.595469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.144 qpair failed and we were unable to recover it. 00:39:28.144 [2024-07-22 10:55:33.595785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.144 [2024-07-22 10:55:33.595796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.144 qpair failed and we were unable to recover it. 00:39:28.144 [2024-07-22 10:55:33.596090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.144 [2024-07-22 10:55:33.596101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.144 qpair failed and we were unable to recover it. 00:39:28.144 [2024-07-22 10:55:33.596416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.144 [2024-07-22 10:55:33.596426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.144 qpair failed and we were unable to recover it. 00:39:28.144 [2024-07-22 10:55:33.596738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.144 [2024-07-22 10:55:33.596748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.144 qpair failed and we were unable to recover it. 00:39:28.144 [2024-07-22 10:55:33.596962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.144 [2024-07-22 10:55:33.596973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.144 qpair failed and we were unable to recover it. 00:39:28.144 [2024-07-22 10:55:33.597277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.144 [2024-07-22 10:55:33.597288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.144 qpair failed and we were unable to recover it. 00:39:28.144 [2024-07-22 10:55:33.597629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.144 [2024-07-22 10:55:33.597641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.144 qpair failed and we were unable to recover it. 00:39:28.144 [2024-07-22 10:55:33.597975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.144 [2024-07-22 10:55:33.597986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.144 qpair failed and we were unable to recover it. 00:39:28.144 [2024-07-22 10:55:33.598305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.144 [2024-07-22 10:55:33.598315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.144 qpair failed and we were unable to recover it. 
00:39:28.144 [2024-07-22 10:55:33.598700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.144 [2024-07-22 10:55:33.598711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.144 qpair failed and we were unable to recover it. 00:39:28.144 [2024-07-22 10:55:33.599057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.145 [2024-07-22 10:55:33.599068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.145 qpair failed and we were unable to recover it. 00:39:28.145 [2024-07-22 10:55:33.599387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.145 [2024-07-22 10:55:33.599402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.145 qpair failed and we were unable to recover it. 00:39:28.145 [2024-07-22 10:55:33.599793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.145 [2024-07-22 10:55:33.599803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.145 qpair failed and we were unable to recover it. 00:39:28.145 [2024-07-22 10:55:33.600144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.145 [2024-07-22 10:55:33.600154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.145 qpair failed and we were unable to recover it. 00:39:28.145 [2024-07-22 10:55:33.600477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.145 [2024-07-22 10:55:33.600488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.145 qpair failed and we were unable to recover it. 00:39:28.145 [2024-07-22 10:55:33.600832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.145 [2024-07-22 10:55:33.600844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.145 qpair failed and we were unable to recover it. 00:39:28.145 [2024-07-22 10:55:33.601201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.145 [2024-07-22 10:55:33.601212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.145 qpair failed and we were unable to recover it. 00:39:28.145 [2024-07-22 10:55:33.601552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.145 [2024-07-22 10:55:33.601564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.145 qpair failed and we were unable to recover it. 00:39:28.145 [2024-07-22 10:55:33.601881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.145 [2024-07-22 10:55:33.601892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.145 qpair failed and we were unable to recover it. 
00:39:28.145 [2024-07-22 10:55:33.602214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.145 [2024-07-22 10:55:33.602225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.145 qpair failed and we were unable to recover it. 00:39:28.145 [2024-07-22 10:55:33.602562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.145 [2024-07-22 10:55:33.602574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.145 qpair failed and we were unable to recover it. 00:39:28.145 [2024-07-22 10:55:33.602893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.145 [2024-07-22 10:55:33.602903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.145 qpair failed and we were unable to recover it. 00:39:28.145 [2024-07-22 10:55:33.603250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.145 [2024-07-22 10:55:33.603260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.145 qpair failed and we were unable to recover it. 00:39:28.145 [2024-07-22 10:55:33.603601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.145 [2024-07-22 10:55:33.603612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.145 qpair failed and we were unable to recover it. 00:39:28.145 [2024-07-22 10:55:33.603932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.145 [2024-07-22 10:55:33.603943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.145 qpair failed and we were unable to recover it. 00:39:28.145 [2024-07-22 10:55:33.604285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.145 [2024-07-22 10:55:33.604296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.145 qpair failed and we were unable to recover it. 00:39:28.145 [2024-07-22 10:55:33.604626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.145 [2024-07-22 10:55:33.604636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.145 qpair failed and we were unable to recover it. 00:39:28.145 [2024-07-22 10:55:33.604975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.145 [2024-07-22 10:55:33.604985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.145 qpair failed and we were unable to recover it. 00:39:28.145 [2024-07-22 10:55:33.605380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.145 [2024-07-22 10:55:33.605391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.145 qpair failed and we were unable to recover it. 
00:39:28.145 [2024-07-22 10:55:33.605695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.145 [2024-07-22 10:55:33.605706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.145 qpair failed and we were unable to recover it. 00:39:28.145 [2024-07-22 10:55:33.606047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.145 [2024-07-22 10:55:33.606058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.145 qpair failed and we were unable to recover it. 00:39:28.145 [2024-07-22 10:55:33.606353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.145 [2024-07-22 10:55:33.606364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.145 qpair failed and we were unable to recover it. 00:39:28.145 [2024-07-22 10:55:33.606689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.145 [2024-07-22 10:55:33.606699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.145 qpair failed and we were unable to recover it. 00:39:28.145 [2024-07-22 10:55:33.607040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.145 [2024-07-22 10:55:33.607051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.145 qpair failed and we were unable to recover it. 00:39:28.145 [2024-07-22 10:55:33.607403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.145 [2024-07-22 10:55:33.607414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.145 qpair failed and we were unable to recover it. 00:39:28.145 [2024-07-22 10:55:33.607756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.145 [2024-07-22 10:55:33.607766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.145 qpair failed and we were unable to recover it. 00:39:28.145 [2024-07-22 10:55:33.608085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.145 [2024-07-22 10:55:33.608096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.145 qpair failed and we were unable to recover it. 00:39:28.145 [2024-07-22 10:55:33.608437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.145 [2024-07-22 10:55:33.608448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.145 qpair failed and we were unable to recover it. 00:39:28.145 [2024-07-22 10:55:33.608759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.145 [2024-07-22 10:55:33.608770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.145 qpair failed and we were unable to recover it. 
00:39:28.145 [2024-07-22 10:55:33.608987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.145 [2024-07-22 10:55:33.608996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.145 qpair failed and we were unable to recover it. 00:39:28.145 [2024-07-22 10:55:33.609333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.145 [2024-07-22 10:55:33.609344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.145 qpair failed and we were unable to recover it. 00:39:28.145 [2024-07-22 10:55:33.609692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.145 [2024-07-22 10:55:33.609702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.145 qpair failed and we were unable to recover it. 00:39:28.145 [2024-07-22 10:55:33.610051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.145 [2024-07-22 10:55:33.610063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.145 qpair failed and we were unable to recover it. 00:39:28.145 [2024-07-22 10:55:33.610310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.145 [2024-07-22 10:55:33.610321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.145 qpair failed and we were unable to recover it. 00:39:28.145 [2024-07-22 10:55:33.610668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.145 [2024-07-22 10:55:33.610680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.145 qpair failed and we were unable to recover it. 00:39:28.145 [2024-07-22 10:55:33.611026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.145 [2024-07-22 10:55:33.611036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.145 qpair failed and we were unable to recover it. 00:39:28.145 [2024-07-22 10:55:33.611383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.145 [2024-07-22 10:55:33.611393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.146 qpair failed and we were unable to recover it. 00:39:28.146 [2024-07-22 10:55:33.611749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.146 [2024-07-22 10:55:33.611761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.146 qpair failed and we were unable to recover it. 00:39:28.146 [2024-07-22 10:55:33.612080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.146 [2024-07-22 10:55:33.612091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.146 qpair failed and we were unable to recover it. 
00:39:28.146 [2024-07-22 10:55:33.612433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.146 [2024-07-22 10:55:33.612444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.146 qpair failed and we were unable to recover it. 00:39:28.146 [2024-07-22 10:55:33.612759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.146 [2024-07-22 10:55:33.612770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.146 qpair failed and we were unable to recover it. 00:39:28.146 [2024-07-22 10:55:33.613086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.146 [2024-07-22 10:55:33.613098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.146 qpair failed and we were unable to recover it. 00:39:28.146 [2024-07-22 10:55:33.613418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.146 [2024-07-22 10:55:33.613430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.146 qpair failed and we were unable to recover it. 00:39:28.146 [2024-07-22 10:55:33.613745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.146 [2024-07-22 10:55:33.613756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.146 qpair failed and we were unable to recover it. 00:39:28.146 [2024-07-22 10:55:33.614098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.146 [2024-07-22 10:55:33.614108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.146 qpair failed and we were unable to recover it. 00:39:28.146 [2024-07-22 10:55:33.614428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.146 [2024-07-22 10:55:33.614439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.146 qpair failed and we were unable to recover it. 00:39:28.146 [2024-07-22 10:55:33.614824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.146 [2024-07-22 10:55:33.614835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.146 qpair failed and we were unable to recover it. 00:39:28.146 [2024-07-22 10:55:33.615130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.146 [2024-07-22 10:55:33.615141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.146 qpair failed and we were unable to recover it. 00:39:28.146 [2024-07-22 10:55:33.615487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.146 [2024-07-22 10:55:33.615498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.146 qpair failed and we were unable to recover it. 
00:39:28.146 [2024-07-22 10:55:33.615825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.146 [2024-07-22 10:55:33.615835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:28.146 qpair failed and we were unable to recover it.
00:39:28.146 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") recurs for every subsequent reconnect attempt logged between 10:55:33.615 and 10:55:33.683, each attempt failing identically ...]
00:39:28.151 [2024-07-22 10:55:33.683979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.151 [2024-07-22 10:55:33.683990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.151 qpair failed and we were unable to recover it. 00:39:28.151 [2024-07-22 10:55:33.684332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.151 [2024-07-22 10:55:33.684342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.151 qpair failed and we were unable to recover it. 00:39:28.151 [2024-07-22 10:55:33.684703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.151 [2024-07-22 10:55:33.684713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.151 qpair failed and we were unable to recover it. 00:39:28.151 [2024-07-22 10:55:33.685033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.151 [2024-07-22 10:55:33.685044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.151 qpair failed and we were unable to recover it. 00:39:28.151 [2024-07-22 10:55:33.685376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.151 [2024-07-22 10:55:33.685387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.151 qpair failed and we were unable to recover it. 00:39:28.151 [2024-07-22 10:55:33.685705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.151 [2024-07-22 10:55:33.685716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.151 qpair failed and we were unable to recover it. 00:39:28.151 [2024-07-22 10:55:33.686063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.151 [2024-07-22 10:55:33.686074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.151 qpair failed and we were unable to recover it. 00:39:28.151 [2024-07-22 10:55:33.686388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.151 [2024-07-22 10:55:33.686407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.151 qpair failed and we were unable to recover it. 00:39:28.151 [2024-07-22 10:55:33.686751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.151 [2024-07-22 10:55:33.686762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.151 qpair failed and we were unable to recover it. 00:39:28.151 [2024-07-22 10:55:33.687109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.151 [2024-07-22 10:55:33.687120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.151 qpair failed and we were unable to recover it. 
00:39:28.151 [2024-07-22 10:55:33.687462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.151 [2024-07-22 10:55:33.687474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.151 qpair failed and we were unable to recover it. 00:39:28.151 [2024-07-22 10:55:33.687815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.151 [2024-07-22 10:55:33.687825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.151 qpair failed and we were unable to recover it. 00:39:28.151 [2024-07-22 10:55:33.688143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.151 [2024-07-22 10:55:33.688154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.151 qpair failed and we were unable to recover it. 00:39:28.151 [2024-07-22 10:55:33.688401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.151 [2024-07-22 10:55:33.688412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.151 qpair failed and we were unable to recover it. 00:39:28.151 [2024-07-22 10:55:33.688709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.151 [2024-07-22 10:55:33.688719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.151 qpair failed and we were unable to recover it. 00:39:28.151 [2024-07-22 10:55:33.689045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.151 [2024-07-22 10:55:33.689056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.151 qpair failed and we were unable to recover it. 00:39:28.151 [2024-07-22 10:55:33.689374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.151 [2024-07-22 10:55:33.689385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.151 qpair failed and we were unable to recover it. 00:39:28.152 [2024-07-22 10:55:33.689710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.152 [2024-07-22 10:55:33.689721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.152 qpair failed and we were unable to recover it. 00:39:28.152 [2024-07-22 10:55:33.690055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.152 [2024-07-22 10:55:33.690067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.152 qpair failed and we were unable to recover it. 00:39:28.152 [2024-07-22 10:55:33.690379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.152 [2024-07-22 10:55:33.690389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.152 qpair failed and we were unable to recover it. 
00:39:28.152 [2024-07-22 10:55:33.690744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.152 [2024-07-22 10:55:33.690754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.152 qpair failed and we were unable to recover it. 00:39:28.152 [2024-07-22 10:55:33.691100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.152 [2024-07-22 10:55:33.691110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.152 qpair failed and we were unable to recover it. 00:39:28.152 [2024-07-22 10:55:33.691463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.152 [2024-07-22 10:55:33.691476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.152 qpair failed and we were unable to recover it. 00:39:28.152 [2024-07-22 10:55:33.691842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.152 [2024-07-22 10:55:33.691853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.152 qpair failed and we were unable to recover it. 00:39:28.152 [2024-07-22 10:55:33.692165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.152 [2024-07-22 10:55:33.692176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.152 qpair failed and we were unable to recover it. 00:39:28.152 [2024-07-22 10:55:33.692588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.152 [2024-07-22 10:55:33.692599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.152 qpair failed and we were unable to recover it. 00:39:28.152 [2024-07-22 10:55:33.692910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.152 [2024-07-22 10:55:33.692921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.152 qpair failed and we were unable to recover it. 00:39:28.152 [2024-07-22 10:55:33.693240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.152 [2024-07-22 10:55:33.693251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.152 qpair failed and we were unable to recover it. 00:39:28.152 [2024-07-22 10:55:33.693566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.152 [2024-07-22 10:55:33.693577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.152 qpair failed and we were unable to recover it. 00:39:28.152 [2024-07-22 10:55:33.693925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.152 [2024-07-22 10:55:33.693936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.152 qpair failed and we were unable to recover it. 
00:39:28.152 [2024-07-22 10:55:33.694287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.152 [2024-07-22 10:55:33.694298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.152 qpair failed and we were unable to recover it. 00:39:28.152 [2024-07-22 10:55:33.694521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.152 [2024-07-22 10:55:33.694532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.152 qpair failed and we were unable to recover it. 00:39:28.152 [2024-07-22 10:55:33.694897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.152 [2024-07-22 10:55:33.694907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.152 qpair failed and we were unable to recover it. 00:39:28.152 [2024-07-22 10:55:33.695248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.152 [2024-07-22 10:55:33.695259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.152 qpair failed and we were unable to recover it. 00:39:28.152 [2024-07-22 10:55:33.695594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.152 [2024-07-22 10:55:33.695606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.152 qpair failed and we were unable to recover it. 00:39:28.152 [2024-07-22 10:55:33.695925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.152 [2024-07-22 10:55:33.695936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.152 qpair failed and we were unable to recover it. 00:39:28.152 [2024-07-22 10:55:33.696298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.152 [2024-07-22 10:55:33.696309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.152 qpair failed and we were unable to recover it. 00:39:28.152 [2024-07-22 10:55:33.696636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.152 [2024-07-22 10:55:33.696647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.152 qpair failed and we were unable to recover it. 00:39:28.152 [2024-07-22 10:55:33.697000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.152 [2024-07-22 10:55:33.697011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.152 qpair failed and we were unable to recover it. 00:39:28.152 [2024-07-22 10:55:33.697358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.152 [2024-07-22 10:55:33.697369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.152 qpair failed and we were unable to recover it. 
00:39:28.152 [2024-07-22 10:55:33.697687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.152 [2024-07-22 10:55:33.697698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.152 qpair failed and we were unable to recover it. 00:39:28.152 [2024-07-22 10:55:33.698043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.152 [2024-07-22 10:55:33.698055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.152 qpair failed and we were unable to recover it. 00:39:28.152 [2024-07-22 10:55:33.698401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.152 [2024-07-22 10:55:33.698413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.152 qpair failed and we were unable to recover it. 00:39:28.152 [2024-07-22 10:55:33.698727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.152 [2024-07-22 10:55:33.698738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.152 qpair failed and we were unable to recover it. 00:39:28.152 [2024-07-22 10:55:33.699086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.152 [2024-07-22 10:55:33.699097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.152 qpair failed and we were unable to recover it. 00:39:28.152 [2024-07-22 10:55:33.699448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.152 [2024-07-22 10:55:33.699459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.152 qpair failed and we were unable to recover it. 00:39:28.152 [2024-07-22 10:55:33.699780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.152 [2024-07-22 10:55:33.699791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.152 qpair failed and we were unable to recover it. 00:39:28.152 [2024-07-22 10:55:33.700114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.152 [2024-07-22 10:55:33.700124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.152 qpair failed and we were unable to recover it. 00:39:28.152 [2024-07-22 10:55:33.700440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.152 [2024-07-22 10:55:33.700451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.152 qpair failed and we were unable to recover it. 00:39:28.152 [2024-07-22 10:55:33.700764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.152 [2024-07-22 10:55:33.700777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.152 qpair failed and we were unable to recover it. 
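Editor's note: every failure in the run above reports errno = 111, which on Linux is ECONNREFUSED. The host side keeps calling connect() toward 10.0.0.2 port 4420, nothing is currently accepting connections there, so each nvme_tcp_qpair_connect_sock() attempt is refused and the qpair cannot be recovered. A minimal standalone sketch (illustrative only, not SPDK or test-suite code) that produces the same errno on a Linux machine where no listener is bound to that address:

/* Minimal sketch: what the repeated "errno = 111" in the log means.
 * On Linux, 111 is ECONNREFUSED: the connect() attempt is rejected
 * because nothing is listening on 10.0.0.2:4420 at that moment. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                 /* NVMe/TCP port used by the test */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no listener on the far side this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}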
00:39:28.152 [2024-07-22 10:55:33.701120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.152 [2024-07-22 10:55:33.701131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:28.152 qpair failed and we were unable to recover it.
00:39:28.152 [2024-07-22 10:55:33.701319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.152 [2024-07-22 10:55:33.701332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:28.152 qpair failed and we were unable to recover it.
00:39:28.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2264985 Killed "${NVMF_APP[@]}" "$@"
00:39:28.152 [2024-07-22 10:55:33.701613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.152 [2024-07-22 10:55:33.701625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:28.152 qpair failed and we were unable to recover it.
00:39:28.152 [2024-07-22 10:55:33.701913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.152 [2024-07-22 10:55:33.701924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:28.152 qpair failed and we were unable to recover it.
00:39:28.152 [2024-07-22 10:55:33.702265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.152 [2024-07-22 10:55:33.702277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:28.152 qpair failed and we were unable to recover it.
00:39:28.152 10:55:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:39:28.152 [2024-07-22 10:55:33.702615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.152 [2024-07-22 10:55:33.702627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:28.152 qpair failed and we were unable to recover it.
00:39:28.152 10:55:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:39:28.152 [2024-07-22 10:55:33.702950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.153 [2024-07-22 10:55:33.702962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:28.153 10:55:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:39:28.153 qpair failed and we were unable to recover it.
00:39:28.153 10:55:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:39:28.153 [2024-07-22 10:55:33.703310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.153 [2024-07-22 10:55:33.703322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:28.153 qpair failed and we were unable to recover it.
00:39:28.153 10:55:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:39:28.153 [2024-07-22 10:55:33.703639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.153 [2024-07-22 10:55:33.703652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:28.153 qpair failed and we were unable to recover it.
00:39:28.153 [2024-07-22 10:55:33.703986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.153 [2024-07-22 10:55:33.703998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:28.153 qpair failed and we were unable to recover it.
00:39:28.153 [2024-07-22 10:55:33.704315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.153 [2024-07-22 10:55:33.704327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:28.153 qpair failed and we were unable to recover it.
00:39:28.153 [2024-07-22 10:55:33.704665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.153 [2024-07-22 10:55:33.704677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:28.153 qpair failed and we were unable to recover it.
00:39:28.153 [2024-07-22 10:55:33.705022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.153 [2024-07-22 10:55:33.705033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:28.153 qpair failed and we were unable to recover it.
00:39:28.153 [2024-07-22 10:55:33.705225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.153 [2024-07-22 10:55:33.705236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:28.153 qpair failed and we were unable to recover it.
00:39:28.153 [2024-07-22 10:55:33.705589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.153 [2024-07-22 10:55:33.705600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:28.153 qpair failed and we were unable to recover it.
00:39:28.153 [2024-07-22 10:55:33.705949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.153 [2024-07-22 10:55:33.705960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:28.153 qpair failed and we were unable to recover it.
00:39:28.153 [2024-07-22 10:55:33.706313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.153 [2024-07-22 10:55:33.706323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:28.153 qpair failed and we were unable to recover it.
00:39:28.153 [2024-07-22 10:55:33.706651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.153 [2024-07-22 10:55:33.706662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:28.153 qpair failed and we were unable to recover it.
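Editor's note: the trace lines interleaved above show the test recovering from the kill at target_disconnect.sh line 36 by calling disconnect_init 10.0.0.2, which restarts the target application via nvmfappstart -m 0xF0. In SPDK applications -m is a hexadecimal CPU core mask, so 0xF0 (binary 11110000) pins the relaunched target to cores 4-7. A tiny illustrative program (not SPDK's own mask parsing) showing how such a mask expands to core numbers:

/* Sketch: how a hexadecimal core mask like the test's "-m 0xF0" maps to CPU cores.
 * 0xF0 = 0b11110000, i.e. bits 4..7 are set, so cores 4-7 are selected. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    unsigned long mask = strtoul("0xF0", NULL, 16);

    printf("core mask 0x%lx selects cores:", mask);
    for (unsigned int core = 0; core < 8 * sizeof(mask); core++) {
        if (mask & (1UL << core)) {
            printf(" %u", core);             /* prints: 4 5 6 7 */
        }
    }
    printf("\n");
    return 0;
}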
00:39:28.153 [2024-07-22 10:55:33.706983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.153 [2024-07-22 10:55:33.706995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:28.153 qpair failed and we were unable to recover it.
00:39:28.153 [2024-07-22 10:55:33.707335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.153 [2024-07-22 10:55:33.707345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:28.153 qpair failed and we were unable to recover it.
00:39:28.153 [2024-07-22 10:55:33.707668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.153 [2024-07-22 10:55:33.707681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:28.153 qpair failed and we were unable to recover it.
00:39:28.153 [2024-07-22 10:55:33.708006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.153 [2024-07-22 10:55:33.708018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:28.153 qpair failed and we were unable to recover it.
00:39:28.153 [2024-07-22 10:55:33.708989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.153 [2024-07-22 10:55:33.709012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:28.153 qpair failed and we were unable to recover it.
00:39:28.153 [2024-07-22 10:55:33.709349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.153 [2024-07-22 10:55:33.709361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:28.153 qpair failed and we were unable to recover it.
00:39:28.153 [2024-07-22 10:55:33.709702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.153 [2024-07-22 10:55:33.709714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:28.153 qpair failed and we were unable to recover it.
00:39:28.153 [2024-07-22 10:55:33.710017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.153 [2024-07-22 10:55:33.710029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:28.153 qpair failed and we were unable to recover it.
00:39:28.153 [2024-07-22 10:55:33.710365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.153 [2024-07-22 10:55:33.710378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:28.153 qpair failed and we were unable to recover it.
00:39:28.153 10:55:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2265835
00:39:28.153 [2024-07-22 10:55:33.710731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.153 [2024-07-22 10:55:33.710745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:28.153 qpair failed and we were unable to recover it.
00:39:28.153 10:55:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2265835
00:39:28.153 [2024-07-22 10:55:33.711062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.153 [2024-07-22 10:55:33.711075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:28.153 qpair failed and we were unable to recover it.
00:39:28.153 10:55:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:39:28.153 10:55:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2265835 ']'
00:39:28.153 [2024-07-22 10:55:33.711402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.153 [2024-07-22 10:55:33.711415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:28.153 qpair failed and we were unable to recover it.
00:39:28.153 10:55:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:39:28.153 10:55:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:39:28.153 [2024-07-22 10:55:33.711719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.153 [2024-07-22 10:55:33.711732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:28.153 qpair failed and we were unable to recover it.
00:39:28.153 10:55:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:39:28.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:39:28.153 [2024-07-22 10:55:33.712036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.153 [2024-07-22 10:55:33.712049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:28.153 qpair failed and we were unable to recover it.
00:39:28.153 10:55:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:39:28.153 10:55:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:39:28.153 [2024-07-22 10:55:33.712401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.153 [2024-07-22 10:55:33.712415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:28.153 qpair failed and we were unable to recover it.
00:39:28.153 [2024-07-22 10:55:33.712761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.153 [2024-07-22 10:55:33.712773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:28.153 qpair failed and we were unable to recover it.
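Editor's note: the trace above relaunches nvmf_tgt (-i 0 -e 0xFFFF -m 0xF0, pid 2265835) inside the cvl_0_0_ns_spdk network namespace and then calls waitforlisten, which waits for the new process to start listening on the UNIX domain socket /var/tmp/spdk.sock (rpc_addr=/var/tmp/spdk.sock, max_retries=100 in the trace); the NVMe/TCP connect() retries keep failing in the background while this happens. Below is a hedged sketch of that general wait-for-listen pattern; wait_for_rpc_socket and its polling strategy are assumptions made for illustration, not the real helper from autotest_common.sh:

/* Hypothetical wait-for-listen loop: keep trying to connect to the app's
 * UNIX domain RPC socket until it accepts, or give up after max_retries
 * attempts (the trace above sets max_retries=100). */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int wait_for_rpc_socket(const char *path, int max_retries)
{
    struct sockaddr_un addr = {0};
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    for (int attempt = 0; attempt < max_retries; attempt++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0) {
            return -errno;
        }
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;                /* target is up and listening */
        }
        close(fd);
        sleep(1);                    /* socket not accepting yet; retry */
    }
    return -ETIMEDOUT;
}

int main(void)
{
    if (wait_for_rpc_socket("/var/tmp/spdk.sock", 100) == 0) {
        printf("process is listening on /var/tmp/spdk.sock\n");
    } else {
        printf("gave up waiting for /var/tmp/spdk.sock\n");
    }
    return 0;
}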
00:39:28.153 [2024-07-22 10:55:33.713096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.153 [2024-07-22 10:55:33.713108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.153 qpair failed and we were unable to recover it. 00:39:28.153 [2024-07-22 10:55:33.713456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.153 [2024-07-22 10:55:33.713469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.153 qpair failed and we were unable to recover it. 00:39:28.153 [2024-07-22 10:55:33.714344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.154 [2024-07-22 10:55:33.714369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.154 qpair failed and we were unable to recover it. 00:39:28.154 [2024-07-22 10:55:33.714677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.154 [2024-07-22 10:55:33.714692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.154 qpair failed and we were unable to recover it. 00:39:28.154 [2024-07-22 10:55:33.715617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.154 [2024-07-22 10:55:33.715639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.154 qpair failed and we were unable to recover it. 00:39:28.154 [2024-07-22 10:55:33.715980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.154 [2024-07-22 10:55:33.715993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.154 qpair failed and we were unable to recover it. 00:39:28.154 [2024-07-22 10:55:33.716423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.154 [2024-07-22 10:55:33.716439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.154 qpair failed and we were unable to recover it. 00:39:28.154 [2024-07-22 10:55:33.716772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.154 [2024-07-22 10:55:33.716787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.154 qpair failed and we were unable to recover it. 00:39:28.154 [2024-07-22 10:55:33.717116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.154 [2024-07-22 10:55:33.717130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.154 qpair failed and we were unable to recover it. 00:39:28.154 [2024-07-22 10:55:33.717477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.154 [2024-07-22 10:55:33.717491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.154 qpair failed and we were unable to recover it. 
00:39:28.154 [2024-07-22 10:55:33.717808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.154 [2024-07-22 10:55:33.717819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.154 qpair failed and we were unable to recover it. 00:39:28.154 [2024-07-22 10:55:33.718090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.154 [2024-07-22 10:55:33.718101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.154 qpair failed and we were unable to recover it. 00:39:28.154 [2024-07-22 10:55:33.718343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.154 [2024-07-22 10:55:33.718355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.154 qpair failed and we were unable to recover it. 00:39:28.154 [2024-07-22 10:55:33.718635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.154 [2024-07-22 10:55:33.718647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.154 qpair failed and we were unable to recover it. 00:39:28.154 [2024-07-22 10:55:33.718957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.154 [2024-07-22 10:55:33.718968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.154 qpair failed and we were unable to recover it. 00:39:28.154 [2024-07-22 10:55:33.719308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.154 [2024-07-22 10:55:33.719321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.154 qpair failed and we were unable to recover it. 00:39:28.154 [2024-07-22 10:55:33.719529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.154 [2024-07-22 10:55:33.719540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.154 qpair failed and we were unable to recover it. 00:39:28.154 [2024-07-22 10:55:33.719859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.154 [2024-07-22 10:55:33.719871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.154 qpair failed and we were unable to recover it. 00:39:28.154 [2024-07-22 10:55:33.720201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.154 [2024-07-22 10:55:33.720212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.154 qpair failed and we were unable to recover it. 00:39:28.154 [2024-07-22 10:55:33.720563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.154 [2024-07-22 10:55:33.720573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.154 qpair failed and we were unable to recover it. 
00:39:28.154 [2024-07-22 10:55:33.720914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.154 [2024-07-22 10:55:33.720925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.154 qpair failed and we were unable to recover it. 00:39:28.154 [2024-07-22 10:55:33.721172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.154 [2024-07-22 10:55:33.721182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.154 qpair failed and we were unable to recover it. 00:39:28.154 [2024-07-22 10:55:33.721480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.154 [2024-07-22 10:55:33.721491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.154 qpair failed and we were unable to recover it. 00:39:28.154 [2024-07-22 10:55:33.721841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.154 [2024-07-22 10:55:33.721852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.154 qpair failed and we were unable to recover it. 00:39:28.154 [2024-07-22 10:55:33.722077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.154 [2024-07-22 10:55:33.722089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.154 qpair failed and we were unable to recover it. 00:39:28.154 [2024-07-22 10:55:33.722394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.154 [2024-07-22 10:55:33.722409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.154 qpair failed and we were unable to recover it. 00:39:28.154 [2024-07-22 10:55:33.722710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.154 [2024-07-22 10:55:33.722724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.154 qpair failed and we were unable to recover it. 00:39:28.154 [2024-07-22 10:55:33.723069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.154 [2024-07-22 10:55:33.723081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.154 qpair failed and we were unable to recover it. 00:39:28.154 [2024-07-22 10:55:33.723412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.154 [2024-07-22 10:55:33.723425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.154 qpair failed and we were unable to recover it. 00:39:28.154 [2024-07-22 10:55:33.723762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.154 [2024-07-22 10:55:33.723773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.154 qpair failed and we were unable to recover it. 
00:39:28.154 [2024-07-22 10:55:33.724116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.154 [2024-07-22 10:55:33.724128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.154 qpair failed and we were unable to recover it. 00:39:28.154 [2024-07-22 10:55:33.724452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.154 [2024-07-22 10:55:33.724465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.154 qpair failed and we were unable to recover it. 00:39:28.154 [2024-07-22 10:55:33.724859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.154 [2024-07-22 10:55:33.724870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.154 qpair failed and we were unable to recover it. 00:39:28.154 [2024-07-22 10:55:33.725214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.154 [2024-07-22 10:55:33.725225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.154 qpair failed and we were unable to recover it. 00:39:28.154 [2024-07-22 10:55:33.725586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.154 [2024-07-22 10:55:33.725597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.154 qpair failed and we were unable to recover it. 00:39:28.154 [2024-07-22 10:55:33.725826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.154 [2024-07-22 10:55:33.725837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.154 qpair failed and we were unable to recover it. 00:39:28.154 [2024-07-22 10:55:33.726150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.154 [2024-07-22 10:55:33.726161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.154 qpair failed and we were unable to recover it. 00:39:28.154 [2024-07-22 10:55:33.726481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.154 [2024-07-22 10:55:33.726492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.154 qpair failed and we were unable to recover it. 00:39:28.154 [2024-07-22 10:55:33.726847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.154 [2024-07-22 10:55:33.726858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.154 qpair failed and we were unable to recover it. 00:39:28.154 [2024-07-22 10:55:33.727152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.154 [2024-07-22 10:55:33.727163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.154 qpair failed and we were unable to recover it. 
00:39:28.154 [2024-07-22 10:55:33.727357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.154 [2024-07-22 10:55:33.727370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.154 qpair failed and we were unable to recover it. 00:39:28.154 [2024-07-22 10:55:33.727708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.154 [2024-07-22 10:55:33.727719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.154 qpair failed and we were unable to recover it. 00:39:28.154 [2024-07-22 10:55:33.728071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.154 [2024-07-22 10:55:33.728083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.154 qpair failed and we were unable to recover it. 00:39:28.154 [2024-07-22 10:55:33.728402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.155 [2024-07-22 10:55:33.728414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.155 qpair failed and we were unable to recover it. 00:39:28.155 [2024-07-22 10:55:33.728713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.155 [2024-07-22 10:55:33.728724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.155 qpair failed and we were unable to recover it. 00:39:28.155 [2024-07-22 10:55:33.729054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.155 [2024-07-22 10:55:33.729065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.155 qpair failed and we were unable to recover it. 00:39:28.155 [2024-07-22 10:55:33.729386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.155 [2024-07-22 10:55:33.729401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.155 qpair failed and we were unable to recover it. 00:39:28.155 [2024-07-22 10:55:33.729629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.155 [2024-07-22 10:55:33.729639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.155 qpair failed and we were unable to recover it. 00:39:28.155 [2024-07-22 10:55:33.729975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.155 [2024-07-22 10:55:33.729986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.155 qpair failed and we were unable to recover it. 00:39:28.155 [2024-07-22 10:55:33.730327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.155 [2024-07-22 10:55:33.730339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.155 qpair failed and we were unable to recover it. 
00:39:28.155 [2024-07-22 10:55:33.730576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.155 [2024-07-22 10:55:33.730589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.155 qpair failed and we were unable to recover it. 00:39:28.155 [2024-07-22 10:55:33.730891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.155 [2024-07-22 10:55:33.730902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.155 qpair failed and we were unable to recover it. 00:39:28.155 [2024-07-22 10:55:33.731226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.155 [2024-07-22 10:55:33.731237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.155 qpair failed and we were unable to recover it. 00:39:28.155 [2024-07-22 10:55:33.731591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.155 [2024-07-22 10:55:33.731604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.155 qpair failed and we were unable to recover it. 00:39:28.155 [2024-07-22 10:55:33.731916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.155 [2024-07-22 10:55:33.731926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.155 qpair failed and we were unable to recover it. 00:39:28.155 [2024-07-22 10:55:33.732270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.155 [2024-07-22 10:55:33.732281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.155 qpair failed and we were unable to recover it. 00:39:28.155 [2024-07-22 10:55:33.732630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.155 [2024-07-22 10:55:33.732641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.155 qpair failed and we were unable to recover it. 00:39:28.155 [2024-07-22 10:55:33.732919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.155 [2024-07-22 10:55:33.732930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.155 qpair failed and we were unable to recover it. 00:39:28.155 [2024-07-22 10:55:33.733045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.155 [2024-07-22 10:55:33.733055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.155 qpair failed and we were unable to recover it. 00:39:28.155 [2024-07-22 10:55:33.733548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.155 [2024-07-22 10:55:33.733559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.155 qpair failed and we were unable to recover it. 
00:39:28.155 [2024-07-22 10:55:33.733903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.155 [2024-07-22 10:55:33.733914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.155 qpair failed and we were unable to recover it. 00:39:28.155 [2024-07-22 10:55:33.734238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.155 [2024-07-22 10:55:33.734249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.155 qpair failed and we were unable to recover it. 00:39:28.155 [2024-07-22 10:55:33.734570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.155 [2024-07-22 10:55:33.734581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.155 qpair failed and we were unable to recover it. 00:39:28.155 [2024-07-22 10:55:33.734908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.155 [2024-07-22 10:55:33.734919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.155 qpair failed and we were unable to recover it. 00:39:28.155 [2024-07-22 10:55:33.735239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.155 [2024-07-22 10:55:33.735250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.155 qpair failed and we were unable to recover it. 00:39:28.155 [2024-07-22 10:55:33.735574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.155 [2024-07-22 10:55:33.735585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.155 qpair failed and we were unable to recover it. 00:39:28.155 [2024-07-22 10:55:33.735806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.155 [2024-07-22 10:55:33.735817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.155 qpair failed and we were unable to recover it. 00:39:28.155 [2024-07-22 10:55:33.736165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.155 [2024-07-22 10:55:33.736177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.155 qpair failed and we were unable to recover it. 00:39:28.155 [2024-07-22 10:55:33.736495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.155 [2024-07-22 10:55:33.736506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.155 qpair failed and we were unable to recover it. 00:39:28.155 [2024-07-22 10:55:33.736811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.155 [2024-07-22 10:55:33.736822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.155 qpair failed and we were unable to recover it. 
00:39:28.155 [2024-07-22 10:55:33.737041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.155 [2024-07-22 10:55:33.737052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.155 qpair failed and we were unable to recover it. 00:39:28.155 [2024-07-22 10:55:33.737384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.155 [2024-07-22 10:55:33.737409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.155 qpair failed and we were unable to recover it. 00:39:28.155 [2024-07-22 10:55:33.737804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.155 [2024-07-22 10:55:33.737815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.155 qpair failed and we were unable to recover it. 00:39:28.155 [2024-07-22 10:55:33.738146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.155 [2024-07-22 10:55:33.738157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.155 qpair failed and we were unable to recover it. 00:39:28.155 [2024-07-22 10:55:33.738422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.155 [2024-07-22 10:55:33.738433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.155 qpair failed and we were unable to recover it. 00:39:28.155 [2024-07-22 10:55:33.738760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.155 [2024-07-22 10:55:33.738772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.155 qpair failed and we were unable to recover it. 00:39:28.155 [2024-07-22 10:55:33.739071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.155 [2024-07-22 10:55:33.739082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.155 qpair failed and we were unable to recover it. 00:39:28.155 [2024-07-22 10:55:33.739436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.155 [2024-07-22 10:55:33.739447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.155 qpair failed and we were unable to recover it. 00:39:28.155 [2024-07-22 10:55:33.739767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.155 [2024-07-22 10:55:33.739779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.155 qpair failed and we were unable to recover it. 00:39:28.155 [2024-07-22 10:55:33.740043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.155 [2024-07-22 10:55:33.740053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.155 qpair failed and we were unable to recover it. 
00:39:28.155 [2024-07-22 10:55:33.740400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.155 [2024-07-22 10:55:33.740420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.155 qpair failed and we were unable to recover it. 00:39:28.155 [2024-07-22 10:55:33.740659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.155 [2024-07-22 10:55:33.740670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.155 qpair failed and we were unable to recover it. 00:39:28.155 [2024-07-22 10:55:33.741011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.155 [2024-07-22 10:55:33.741022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.155 qpair failed and we were unable to recover it. 00:39:28.155 [2024-07-22 10:55:33.741254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.155 [2024-07-22 10:55:33.741265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.155 qpair failed and we were unable to recover it. 00:39:28.155 [2024-07-22 10:55:33.741581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.155 [2024-07-22 10:55:33.741592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.156 qpair failed and we were unable to recover it. 00:39:28.156 [2024-07-22 10:55:33.741947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.156 [2024-07-22 10:55:33.741957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.156 qpair failed and we were unable to recover it. 00:39:28.156 [2024-07-22 10:55:33.742275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.156 [2024-07-22 10:55:33.742287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.156 qpair failed and we were unable to recover it. 00:39:28.156 [2024-07-22 10:55:33.742628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.156 [2024-07-22 10:55:33.742639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.156 qpair failed and we were unable to recover it. 00:39:28.156 [2024-07-22 10:55:33.742965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.156 [2024-07-22 10:55:33.742976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.156 qpair failed and we were unable to recover it. 00:39:28.156 [2024-07-22 10:55:33.743324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.156 [2024-07-22 10:55:33.743334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.156 qpair failed and we were unable to recover it. 
00:39:28.156 [2024-07-22 10:55:33.743664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.156 [2024-07-22 10:55:33.743675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.156 qpair failed and we were unable to recover it. 00:39:28.156 [2024-07-22 10:55:33.743991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.156 [2024-07-22 10:55:33.744003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.156 qpair failed and we were unable to recover it. 00:39:28.156 [2024-07-22 10:55:33.744324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.156 [2024-07-22 10:55:33.744335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.156 qpair failed and we were unable to recover it. 00:39:28.156 [2024-07-22 10:55:33.744550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.156 [2024-07-22 10:55:33.744562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.156 qpair failed and we were unable to recover it. 00:39:28.156 [2024-07-22 10:55:33.744881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.156 [2024-07-22 10:55:33.744893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.156 qpair failed and we were unable to recover it. 00:39:28.156 [2024-07-22 10:55:33.745230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.156 [2024-07-22 10:55:33.745242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.156 qpair failed and we were unable to recover it. 00:39:28.156 [2024-07-22 10:55:33.745560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.156 [2024-07-22 10:55:33.745571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.156 qpair failed and we were unable to recover it. 00:39:28.156 [2024-07-22 10:55:33.745883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.156 [2024-07-22 10:55:33.745894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.156 qpair failed and we were unable to recover it. 00:39:28.156 [2024-07-22 10:55:33.746230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.156 [2024-07-22 10:55:33.746240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.156 qpair failed and we were unable to recover it. 00:39:28.156 [2024-07-22 10:55:33.746593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.156 [2024-07-22 10:55:33.746604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.156 qpair failed and we were unable to recover it. 
00:39:28.156 [2024-07-22 10:55:33.746949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.156 [2024-07-22 10:55:33.746960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.156 qpair failed and we were unable to recover it. 00:39:28.156 [2024-07-22 10:55:33.747313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.156 [2024-07-22 10:55:33.747324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.156 qpair failed and we were unable to recover it. 00:39:28.156 [2024-07-22 10:55:33.747636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.156 [2024-07-22 10:55:33.747646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.156 qpair failed and we were unable to recover it. 00:39:28.156 [2024-07-22 10:55:33.747968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.156 [2024-07-22 10:55:33.747979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.156 qpair failed and we were unable to recover it. 00:39:28.156 [2024-07-22 10:55:33.748306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.156 [2024-07-22 10:55:33.748317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.156 qpair failed and we were unable to recover it. 00:39:28.156 [2024-07-22 10:55:33.748645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.156 [2024-07-22 10:55:33.748655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.156 qpair failed and we were unable to recover it. 00:39:28.156 [2024-07-22 10:55:33.748964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.156 [2024-07-22 10:55:33.748975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.156 qpair failed and we were unable to recover it. 00:39:28.156 [2024-07-22 10:55:33.749297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.156 [2024-07-22 10:55:33.749307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.156 qpair failed and we were unable to recover it. 00:39:28.156 [2024-07-22 10:55:33.749620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.156 [2024-07-22 10:55:33.749631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.156 qpair failed and we were unable to recover it. 00:39:28.156 [2024-07-22 10:55:33.749902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.156 [2024-07-22 10:55:33.749913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.156 qpair failed and we were unable to recover it. 
00:39:28.156 [2024-07-22 10:55:33.750298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.156 [2024-07-22 10:55:33.750309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.156 qpair failed and we were unable to recover it. 00:39:28.156 [2024-07-22 10:55:33.750635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.156 [2024-07-22 10:55:33.750646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.156 qpair failed and we were unable to recover it. 00:39:28.156 [2024-07-22 10:55:33.750982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.156 [2024-07-22 10:55:33.750994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.156 qpair failed and we were unable to recover it. 00:39:28.156 [2024-07-22 10:55:33.751418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.156 [2024-07-22 10:55:33.751429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.156 qpair failed and we were unable to recover it. 00:39:28.156 [2024-07-22 10:55:33.751730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.156 [2024-07-22 10:55:33.751741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.156 qpair failed and we were unable to recover it. 00:39:28.156 [2024-07-22 10:55:33.752057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.156 [2024-07-22 10:55:33.752068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.156 qpair failed and we were unable to recover it. 00:39:28.156 [2024-07-22 10:55:33.752380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.156 [2024-07-22 10:55:33.752391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.156 qpair failed and we were unable to recover it. 00:39:28.156 [2024-07-22 10:55:33.752722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.156 [2024-07-22 10:55:33.752732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.156 qpair failed and we were unable to recover it. 00:39:28.156 [2024-07-22 10:55:33.753040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.156 [2024-07-22 10:55:33.753050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.156 qpair failed and we were unable to recover it. 00:39:28.156 [2024-07-22 10:55:33.753369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.156 [2024-07-22 10:55:33.753379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.156 qpair failed and we were unable to recover it. 
00:39:28.156 [2024-07-22 10:55:33.753714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.156 [2024-07-22 10:55:33.753725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.156 qpair failed and we were unable to recover it. 00:39:28.156 [2024-07-22 10:55:33.754032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.156 [2024-07-22 10:55:33.754045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.156 qpair failed and we were unable to recover it. 00:39:28.156 [2024-07-22 10:55:33.754305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.156 [2024-07-22 10:55:33.754316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.156 qpair failed and we were unable to recover it. 00:39:28.156 [2024-07-22 10:55:33.754635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.156 [2024-07-22 10:55:33.754646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.156 qpair failed and we were unable to recover it. 00:39:28.156 [2024-07-22 10:55:33.754834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.156 [2024-07-22 10:55:33.754847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.156 qpair failed and we were unable to recover it. 00:39:28.156 [2024-07-22 10:55:33.755154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.156 [2024-07-22 10:55:33.755164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.156 qpair failed and we were unable to recover it. 00:39:28.157 [2024-07-22 10:55:33.755508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.157 [2024-07-22 10:55:33.755519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.157 qpair failed and we were unable to recover it. 00:39:28.157 [2024-07-22 10:55:33.755836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.157 [2024-07-22 10:55:33.755847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.157 qpair failed and we were unable to recover it. 00:39:28.157 [2024-07-22 10:55:33.756163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.157 [2024-07-22 10:55:33.756175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.157 qpair failed and we were unable to recover it. 00:39:28.157 [2024-07-22 10:55:33.756508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.157 [2024-07-22 10:55:33.756521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.157 qpair failed and we were unable to recover it. 
00:39:28.157 [2024-07-22 10:55:33.756880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.157 [2024-07-22 10:55:33.756891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.157 qpair failed and we were unable to recover it. 00:39:28.157 [2024-07-22 10:55:33.757200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.157 [2024-07-22 10:55:33.757211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.157 qpair failed and we were unable to recover it. 00:39:28.157 [2024-07-22 10:55:33.757557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.157 [2024-07-22 10:55:33.757569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.157 qpair failed and we were unable to recover it. 00:39:28.157 [2024-07-22 10:55:33.757922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.157 [2024-07-22 10:55:33.757933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.157 qpair failed and we were unable to recover it. 00:39:28.157 [2024-07-22 10:55:33.758258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.157 [2024-07-22 10:55:33.758269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.157 qpair failed and we were unable to recover it. 00:39:28.157 [2024-07-22 10:55:33.758610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.157 [2024-07-22 10:55:33.758622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.157 qpair failed and we were unable to recover it. 00:39:28.157 [2024-07-22 10:55:33.758943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.157 [2024-07-22 10:55:33.758954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.157 qpair failed and we were unable to recover it. 00:39:28.157 [2024-07-22 10:55:33.759254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.157 [2024-07-22 10:55:33.759264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.157 qpair failed and we were unable to recover it. 00:39:28.157 [2024-07-22 10:55:33.759565] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:39:28.157 [2024-07-22 10:55:33.759590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.157 [2024-07-22 10:55:33.759603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.157 qpair failed and we were unable to recover it. 
00:39:28.157 [2024-07-22 10:55:33.759613] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:28.157 [2024-07-22 10:55:33.759928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.157 [2024-07-22 10:55:33.759941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.157 qpair failed and we were unable to recover it. 00:39:28.157 [2024-07-22 10:55:33.760286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.157 [2024-07-22 10:55:33.760295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.157 qpair failed and we were unable to recover it. 00:39:28.157 [2024-07-22 10:55:33.760501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.157 [2024-07-22 10:55:33.760513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.157 qpair failed and we were unable to recover it. 00:39:28.157 [2024-07-22 10:55:33.760846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.157 [2024-07-22 10:55:33.760857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.157 qpair failed and we were unable to recover it. 00:39:28.157 [2024-07-22 10:55:33.761200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.157 [2024-07-22 10:55:33.761211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.157 qpair failed and we were unable to recover it. 00:39:28.157 [2024-07-22 10:55:33.761534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.157 [2024-07-22 10:55:33.761545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.157 qpair failed and we were unable to recover it. 00:39:28.157 [2024-07-22 10:55:33.761893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.157 [2024-07-22 10:55:33.761905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.157 qpair failed and we were unable to recover it. 00:39:28.157 [2024-07-22 10:55:33.762265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.157 [2024-07-22 10:55:33.762277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.157 qpair failed and we were unable to recover it. 00:39:28.157 [2024-07-22 10:55:33.762524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.157 [2024-07-22 10:55:33.762536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.157 qpair failed and we were unable to recover it. 
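Interleaved with the connection errors, the target side of the test appears to start a fresh SPDK application here: "Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization..." followed by the DPDK EAL parameter line above (process name nvmf, core mask 0xF0 i.e. cores 4-7, telemetry disabled, per-library log levels, a fixed base virtual address, and hugepage file prefix spdk0 with proc-type auto). The sketch below is illustrative only and assumes nothing beyond the public DPDK API; SPDK builds this argument vector internally from its own options, but the same strings could be handed directly to rte_eal_init() as shown.

/* Illustrative sketch, not SPDK source: passes the EAL arguments printed
 * in the log line above straight to DPDK's rte_eal_init(). Building it
 * requires the DPDK 22.11 headers and libraries. */
#include <stdio.h>
#include <rte_eal.h>

int main(void)
{
    char *eal_argv[] = {
        "nvmf",                           /* process name, as logged */
        "-c", "0xF0",                     /* core mask: cores 4-7    */
        "--no-telemetry",
        "--log-level=lib.eal:6",
        "--log-level=lib.cryptodev:5",
        "--log-level=lib.power:5",
        "--log-level=user1:6",
        "--base-virtaddr=0x200000000000",
        "--match-allocations",
        "--file-prefix=spdk0",            /* hugepage file prefix    */
        "--proc-type=auto",
    };
    int eal_argc = (int)(sizeof(eal_argv) / sizeof(eal_argv[0]));

    if (rte_eal_init(eal_argc, eal_argv) < 0) {
        fprintf(stderr, "rte_eal_init() failed\n");
        return 1;
    }

    /* ... an application would create its NVMe-oF transports here ... */

    rte_eal_cleanup();
    return 0;
}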
00:39:28.157 [2024-07-22 10:55:33.762846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.157 [2024-07-22 10:55:33.762857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.157 qpair failed and we were unable to recover it. 00:39:28.157 [2024-07-22 10:55:33.763196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.157 [2024-07-22 10:55:33.763208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.157 qpair failed and we were unable to recover it. 00:39:28.157 [2024-07-22 10:55:33.763525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.157 [2024-07-22 10:55:33.763538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.157 qpair failed and we were unable to recover it. 00:39:28.157 [2024-07-22 10:55:33.763870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.157 [2024-07-22 10:55:33.763881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.157 qpair failed and we were unable to recover it. 00:39:28.157 [2024-07-22 10:55:33.764178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.157 [2024-07-22 10:55:33.764189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.157 qpair failed and we were unable to recover it. 00:39:28.157 [2024-07-22 10:55:33.764461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.157 [2024-07-22 10:55:33.764472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.157 qpair failed and we were unable to recover it. 00:39:28.157 [2024-07-22 10:55:33.764814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.157 [2024-07-22 10:55:33.764825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.157 qpair failed and we were unable to recover it. 00:39:28.157 [2024-07-22 10:55:33.765058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.157 [2024-07-22 10:55:33.765070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.157 qpair failed and we were unable to recover it. 00:39:28.157 [2024-07-22 10:55:33.765349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.157 [2024-07-22 10:55:33.765361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.157 qpair failed and we were unable to recover it. 00:39:28.157 [2024-07-22 10:55:33.765679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.157 [2024-07-22 10:55:33.765691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.157 qpair failed and we were unable to recover it. 
00:39:28.157 [2024-07-22 10:55:33.766034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.157 [2024-07-22 10:55:33.766046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.157 qpair failed and we were unable to recover it. 00:39:28.157 [2024-07-22 10:55:33.766386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.157 [2024-07-22 10:55:33.766403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.157 qpair failed and we were unable to recover it. 00:39:28.157 [2024-07-22 10:55:33.766634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.157 [2024-07-22 10:55:33.766645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.157 qpair failed and we were unable to recover it. 00:39:28.157 [2024-07-22 10:55:33.766969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.157 [2024-07-22 10:55:33.766982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.157 qpair failed and we were unable to recover it. 00:39:28.157 [2024-07-22 10:55:33.767301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.158 [2024-07-22 10:55:33.767313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.158 qpair failed and we were unable to recover it. 00:39:28.158 [2024-07-22 10:55:33.767653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.158 [2024-07-22 10:55:33.767665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.158 qpair failed and we were unable to recover it. 00:39:28.158 [2024-07-22 10:55:33.767981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.158 [2024-07-22 10:55:33.767993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.158 qpair failed and we were unable to recover it. 00:39:28.158 [2024-07-22 10:55:33.768136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.158 [2024-07-22 10:55:33.768147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.158 qpair failed and we were unable to recover it. 00:39:28.158 [2024-07-22 10:55:33.768319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.158 [2024-07-22 10:55:33.768332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.158 qpair failed and we were unable to recover it. 00:39:28.158 [2024-07-22 10:55:33.768658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.158 [2024-07-22 10:55:33.768670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.158 qpair failed and we were unable to recover it. 
00:39:28.158 [2024-07-22 10:55:33.769019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.158 [2024-07-22 10:55:33.769030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.158 qpair failed and we were unable to recover it. 00:39:28.158 [2024-07-22 10:55:33.769347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.158 [2024-07-22 10:55:33.769359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.158 qpair failed and we were unable to recover it. 00:39:28.158 [2024-07-22 10:55:33.769573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.158 [2024-07-22 10:55:33.769585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.158 qpair failed and we were unable to recover it. 00:39:28.158 [2024-07-22 10:55:33.769881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.158 [2024-07-22 10:55:33.769893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.158 qpair failed and we were unable to recover it. 00:39:28.158 [2024-07-22 10:55:33.770231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.158 [2024-07-22 10:55:33.770242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.158 qpair failed and we were unable to recover it. 00:39:28.158 [2024-07-22 10:55:33.770469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.158 [2024-07-22 10:55:33.770482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.158 qpair failed and we were unable to recover it. 00:39:28.158 [2024-07-22 10:55:33.770866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.158 [2024-07-22 10:55:33.770880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.158 qpair failed and we were unable to recover it. 00:39:28.158 [2024-07-22 10:55:33.771224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.158 [2024-07-22 10:55:33.771236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.158 qpair failed and we were unable to recover it. 00:39:28.158 [2024-07-22 10:55:33.771564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.158 [2024-07-22 10:55:33.771576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.158 qpair failed and we were unable to recover it. 00:39:28.158 [2024-07-22 10:55:33.771873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.158 [2024-07-22 10:55:33.771885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.158 qpair failed and we were unable to recover it. 
00:39:28.158 [2024-07-22 10:55:33.772206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.158 [2024-07-22 10:55:33.772217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.158 qpair failed and we were unable to recover it. 00:39:28.158 [2024-07-22 10:55:33.772637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.158 [2024-07-22 10:55:33.772650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.158 qpair failed and we were unable to recover it. 00:39:28.158 [2024-07-22 10:55:33.772872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.158 [2024-07-22 10:55:33.772885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.158 qpair failed and we were unable to recover it. 00:39:28.158 [2024-07-22 10:55:33.773240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.158 [2024-07-22 10:55:33.773253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.158 qpair failed and we were unable to recover it. 00:39:28.158 [2024-07-22 10:55:33.773604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.158 [2024-07-22 10:55:33.773616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.158 qpair failed and we were unable to recover it. 00:39:28.158 [2024-07-22 10:55:33.773910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.158 [2024-07-22 10:55:33.773921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.158 qpair failed and we were unable to recover it. 00:39:28.158 [2024-07-22 10:55:33.774246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.158 [2024-07-22 10:55:33.774257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.158 qpair failed and we were unable to recover it. 00:39:28.158 [2024-07-22 10:55:33.774540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.158 [2024-07-22 10:55:33.774552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.158 qpair failed and we were unable to recover it. 00:39:28.158 [2024-07-22 10:55:33.774893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.158 [2024-07-22 10:55:33.774905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.158 qpair failed and we were unable to recover it. 00:39:28.158 [2024-07-22 10:55:33.775227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.158 [2024-07-22 10:55:33.775239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.158 qpair failed and we were unable to recover it. 
00:39:28.158 [2024-07-22 10:55:33.775466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.158 [2024-07-22 10:55:33.775479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.158 qpair failed and we were unable to recover it. 00:39:28.158 [2024-07-22 10:55:33.775777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.158 [2024-07-22 10:55:33.775788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.158 qpair failed and we were unable to recover it. 00:39:28.158 [2024-07-22 10:55:33.776108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.158 [2024-07-22 10:55:33.776119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.158 qpair failed and we were unable to recover it. 00:39:28.158 [2024-07-22 10:55:33.776476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.158 [2024-07-22 10:55:33.776487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.158 qpair failed and we were unable to recover it. 00:39:28.158 [2024-07-22 10:55:33.776851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.158 [2024-07-22 10:55:33.776862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.158 qpair failed and we were unable to recover it. 00:39:28.158 [2024-07-22 10:55:33.777058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.158 [2024-07-22 10:55:33.777069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.158 qpair failed and we were unable to recover it. 00:39:28.158 [2024-07-22 10:55:33.777412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.158 [2024-07-22 10:55:33.777423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.158 qpair failed and we were unable to recover it. 00:39:28.158 [2024-07-22 10:55:33.777785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.158 [2024-07-22 10:55:33.777796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.158 qpair failed and we were unable to recover it. 00:39:28.158 [2024-07-22 10:55:33.778108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.158 [2024-07-22 10:55:33.778119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.158 qpair failed and we were unable to recover it. 00:39:28.158 [2024-07-22 10:55:33.778347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.158 [2024-07-22 10:55:33.778358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.158 qpair failed and we were unable to recover it. 
00:39:28.158 [2024-07-22 10:55:33.778683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.158 [2024-07-22 10:55:33.778694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.158 qpair failed and we were unable to recover it. 00:39:28.158 [2024-07-22 10:55:33.779022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.158 [2024-07-22 10:55:33.779032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.158 qpair failed and we were unable to recover it. 00:39:28.158 [2024-07-22 10:55:33.779375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.158 [2024-07-22 10:55:33.779387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.158 qpair failed and we were unable to recover it. 00:39:28.158 [2024-07-22 10:55:33.779700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.159 [2024-07-22 10:55:33.779716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.159 qpair failed and we were unable to recover it. 00:39:28.159 [2024-07-22 10:55:33.780011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.159 [2024-07-22 10:55:33.780023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.159 qpair failed and we were unable to recover it. 00:39:28.159 [2024-07-22 10:55:33.780212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.159 [2024-07-22 10:55:33.780222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.159 qpair failed and we were unable to recover it. 00:39:28.159 [2024-07-22 10:55:33.780536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.159 [2024-07-22 10:55:33.780547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.159 qpair failed and we were unable to recover it. 00:39:28.159 [2024-07-22 10:55:33.780869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.159 [2024-07-22 10:55:33.780880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.159 qpair failed and we were unable to recover it. 00:39:28.159 [2024-07-22 10:55:33.781228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.159 [2024-07-22 10:55:33.781239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.159 qpair failed and we were unable to recover it. 00:39:28.159 [2024-07-22 10:55:33.781576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.159 [2024-07-22 10:55:33.781587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.159 qpair failed and we were unable to recover it. 
00:39:28.159 [2024-07-22 10:55:33.781912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.159 [2024-07-22 10:55:33.781924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.159 qpair failed and we were unable to recover it. 00:39:28.159 [2024-07-22 10:55:33.782250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.159 [2024-07-22 10:55:33.782261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.159 qpair failed and we were unable to recover it. 00:39:28.159 [2024-07-22 10:55:33.782583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.159 [2024-07-22 10:55:33.782594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.159 qpair failed and we were unable to recover it. 00:39:28.159 [2024-07-22 10:55:33.782904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.159 [2024-07-22 10:55:33.782914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.159 qpair failed and we were unable to recover it. 00:39:28.159 [2024-07-22 10:55:33.783337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.159 [2024-07-22 10:55:33.783348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.159 qpair failed and we were unable to recover it. 00:39:28.159 [2024-07-22 10:55:33.783693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.159 [2024-07-22 10:55:33.783704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.159 qpair failed and we were unable to recover it. 00:39:28.159 [2024-07-22 10:55:33.784039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.159 [2024-07-22 10:55:33.784050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.159 qpair failed and we were unable to recover it. 00:39:28.159 [2024-07-22 10:55:33.784278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.159 [2024-07-22 10:55:33.784290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.159 qpair failed and we were unable to recover it. 00:39:28.159 [2024-07-22 10:55:33.784483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.159 [2024-07-22 10:55:33.784494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.159 qpair failed and we were unable to recover it. 00:39:28.159 [2024-07-22 10:55:33.784901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.159 [2024-07-22 10:55:33.784911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.159 qpair failed and we were unable to recover it. 
00:39:28.159 [2024-07-22 10:55:33.785207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.159 [2024-07-22 10:55:33.785218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.159 qpair failed and we were unable to recover it. 00:39:28.159 [2024-07-22 10:55:33.785540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.159 [2024-07-22 10:55:33.785551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.159 qpair failed and we were unable to recover it. 00:39:28.159 [2024-07-22 10:55:33.785882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.159 [2024-07-22 10:55:33.785893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.159 qpair failed and we were unable to recover it. 00:39:28.159 [2024-07-22 10:55:33.786217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.159 [2024-07-22 10:55:33.786227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.159 qpair failed and we were unable to recover it. 00:39:28.159 [2024-07-22 10:55:33.786558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.159 [2024-07-22 10:55:33.786570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.159 qpair failed and we were unable to recover it. 00:39:28.159 [2024-07-22 10:55:33.786786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.159 [2024-07-22 10:55:33.786797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.159 qpair failed and we were unable to recover it. 00:39:28.159 [2024-07-22 10:55:33.787101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.159 [2024-07-22 10:55:33.787113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.159 qpair failed and we were unable to recover it. 00:39:28.159 [2024-07-22 10:55:33.787332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.159 [2024-07-22 10:55:33.787343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.159 qpair failed and we were unable to recover it. 00:39:28.159 [2024-07-22 10:55:33.787683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.159 [2024-07-22 10:55:33.787695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.159 qpair failed and we were unable to recover it. 00:39:28.159 [2024-07-22 10:55:33.787995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.159 [2024-07-22 10:55:33.788005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.159 qpair failed and we were unable to recover it. 
00:39:28.159 [2024-07-22 10:55:33.788318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.159 [2024-07-22 10:55:33.788331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.159 qpair failed and we were unable to recover it. 00:39:28.159 [2024-07-22 10:55:33.788658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.159 [2024-07-22 10:55:33.788670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.159 qpair failed and we were unable to recover it. 00:39:28.159 [2024-07-22 10:55:33.789032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.159 [2024-07-22 10:55:33.789042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.159 qpair failed and we were unable to recover it. 00:39:28.159 [2024-07-22 10:55:33.789401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.159 [2024-07-22 10:55:33.789412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.159 qpair failed and we were unable to recover it. 00:39:28.159 [2024-07-22 10:55:33.789712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.159 [2024-07-22 10:55:33.789723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.159 qpair failed and we were unable to recover it. 00:39:28.159 [2024-07-22 10:55:33.789944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.159 [2024-07-22 10:55:33.789955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.159 qpair failed and we were unable to recover it. 00:39:28.159 [2024-07-22 10:55:33.790189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.159 [2024-07-22 10:55:33.790199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.159 qpair failed and we were unable to recover it. 00:39:28.159 [2024-07-22 10:55:33.790534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.159 [2024-07-22 10:55:33.790545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.159 qpair failed and we were unable to recover it. 00:39:28.159 [2024-07-22 10:55:33.790888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.159 [2024-07-22 10:55:33.790899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.159 qpair failed and we were unable to recover it. 00:39:28.159 [2024-07-22 10:55:33.791118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.159 [2024-07-22 10:55:33.791129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.159 qpair failed and we were unable to recover it. 
00:39:28.159 [2024-07-22 10:55:33.791466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.159 [2024-07-22 10:55:33.791477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.159 qpair failed and we were unable to recover it. 00:39:28.159 [2024-07-22 10:55:33.791695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.159 [2024-07-22 10:55:33.791707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.159 qpair failed and we were unable to recover it. 00:39:28.159 [2024-07-22 10:55:33.792044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.159 [2024-07-22 10:55:33.792055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.159 qpair failed and we were unable to recover it. 00:39:28.159 [2024-07-22 10:55:33.792377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.159 [2024-07-22 10:55:33.792387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.160 qpair failed and we were unable to recover it. 00:39:28.160 [2024-07-22 10:55:33.792672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.160 [2024-07-22 10:55:33.792684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.160 qpair failed and we were unable to recover it. 00:39:28.160 [2024-07-22 10:55:33.793070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.160 [2024-07-22 10:55:33.793081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.160 qpair failed and we were unable to recover it. 00:39:28.160 [2024-07-22 10:55:33.793409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.160 [2024-07-22 10:55:33.793420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.160 qpair failed and we were unable to recover it. 00:39:28.160 [2024-07-22 10:55:33.793702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.160 [2024-07-22 10:55:33.793712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.160 qpair failed and we were unable to recover it. 00:39:28.160 [2024-07-22 10:55:33.794052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.160 [2024-07-22 10:55:33.794062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.160 qpair failed and we were unable to recover it. 00:39:28.160 [2024-07-22 10:55:33.794406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.160 [2024-07-22 10:55:33.794417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.160 qpair failed and we were unable to recover it. 
00:39:28.160 [2024-07-22 10:55:33.794681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.160 [2024-07-22 10:55:33.794692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.160 qpair failed and we were unable to recover it. 00:39:28.160 [2024-07-22 10:55:33.795024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.160 EAL: No free 2048 kB hugepages reported on node 1 00:39:28.160 [2024-07-22 10:55:33.795034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.160 qpair failed and we were unable to recover it. 00:39:28.160 [2024-07-22 10:55:33.795404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.160 [2024-07-22 10:55:33.795415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.160 qpair failed and we were unable to recover it. 00:39:28.160 [2024-07-22 10:55:33.795717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.160 [2024-07-22 10:55:33.795728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.160 qpair failed and we were unable to recover it. 00:39:28.160 [2024-07-22 10:55:33.795923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.160 [2024-07-22 10:55:33.795935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.160 qpair failed and we were unable to recover it. 00:39:28.160 [2024-07-22 10:55:33.796291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.160 [2024-07-22 10:55:33.796302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.160 qpair failed and we were unable to recover it. 00:39:28.160 [2024-07-22 10:55:33.796634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.160 [2024-07-22 10:55:33.796645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.160 qpair failed and we were unable to recover it. 00:39:28.160 [2024-07-22 10:55:33.796983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.160 [2024-07-22 10:55:33.796996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.160 qpair failed and we were unable to recover it. 00:39:28.160 [2024-07-22 10:55:33.797313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.160 [2024-07-22 10:55:33.797323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.160 qpair failed and we were unable to recover it. 00:39:28.160 [2024-07-22 10:55:33.797618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.160 [2024-07-22 10:55:33.797630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.160 qpair failed and we were unable to recover it. 
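The EAL notice interleaved above ("No free 2048 kB hugepages reported on node 1") is emitted by DPDK's environment abstraction layer while the SPDK application initializes; it means NUMA node 1 currently has no free 2 MB hugepages in its pool. A minimal sketch for inspecting that pool, assuming the standard Linux sysfs layout (this is illustration only, not DPDK or SPDK code):

```c
/* hugepage_check.c -- minimal sketch, not DPDK/SPDK code.
 * Reads the standard Linux sysfs counters for 2 MB hugepages on NUMA node 1,
 * the pool the EAL notice above reports as having no free pages.
 */
#include <stdio.h>

static long read_counter(const char *path)
{
    long val = -1;
    FILE *f = fopen(path, "r");
    if (f) {
        if (fscanf(f, "%ld", &val) != 1)
            val = -1;
        fclose(f);
    }
    return val;
}

int main(void)
{
    const char *base = "/sys/devices/system/node/node1/hugepages/hugepages-2048kB";
    char path[256];

    snprintf(path, sizeof(path), "%s/nr_hugepages", base);
    long total = read_counter(path);

    snprintf(path, sizeof(path), "%s/free_hugepages", base);
    long free_pages = read_counter(path);

    /* free == 0 (or a missing node/size directory, reported here as -1)
     * matches "No free 2048 kB hugepages reported on node 1". */
    printf("node1 2048kB hugepages: total=%ld free=%ld\n", total, free_pages);
    return 0;
}
```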
00:39:28.160 [2024-07-22 10:55:33.797971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.160 [2024-07-22 10:55:33.797981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.160 qpair failed and we were unable to recover it. 00:39:28.160 [2024-07-22 10:55:33.798280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.160 [2024-07-22 10:55:33.798291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.160 qpair failed and we were unable to recover it. 00:39:28.160 [2024-07-22 10:55:33.798489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.160 [2024-07-22 10:55:33.798500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.160 qpair failed and we were unable to recover it. 00:39:28.160 [2024-07-22 10:55:33.798842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.160 [2024-07-22 10:55:33.798853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.160 qpair failed and we were unable to recover it. 00:39:28.160 [2024-07-22 10:55:33.799081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.160 [2024-07-22 10:55:33.799091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.160 qpair failed and we were unable to recover it. 00:39:28.160 [2024-07-22 10:55:33.799416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.160 [2024-07-22 10:55:33.799427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.160 qpair failed and we were unable to recover it. 00:39:28.160 [2024-07-22 10:55:33.799808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.160 [2024-07-22 10:55:33.799818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.160 qpair failed and we were unable to recover it. 00:39:28.160 [2024-07-22 10:55:33.800114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.160 [2024-07-22 10:55:33.800125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.160 qpair failed and we were unable to recover it. 00:39:28.160 [2024-07-22 10:55:33.800471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.160 [2024-07-22 10:55:33.800482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.160 qpair failed and we were unable to recover it. 00:39:28.160 [2024-07-22 10:55:33.800818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.160 [2024-07-22 10:55:33.800828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.160 qpair failed and we were unable to recover it. 
00:39:28.160 [2024-07-22 10:55:33.801153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.160 [2024-07-22 10:55:33.801165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.160 qpair failed and we were unable to recover it. 00:39:28.160 [2024-07-22 10:55:33.801438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.160 [2024-07-22 10:55:33.801449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.160 qpair failed and we were unable to recover it. 00:39:28.160 [2024-07-22 10:55:33.801773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.160 [2024-07-22 10:55:33.801785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.160 qpair failed and we were unable to recover it. 00:39:28.160 [2024-07-22 10:55:33.802133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.160 [2024-07-22 10:55:33.802143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.160 qpair failed and we were unable to recover it. 00:39:28.160 [2024-07-22 10:55:33.802474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.160 [2024-07-22 10:55:33.802486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.160 qpair failed and we were unable to recover it. 00:39:28.160 [2024-07-22 10:55:33.802776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.160 [2024-07-22 10:55:33.802786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.160 qpair failed and we were unable to recover it. 00:39:28.160 [2024-07-22 10:55:33.803116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.160 [2024-07-22 10:55:33.803126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.160 qpair failed and we were unable to recover it. 00:39:28.160 [2024-07-22 10:55:33.803468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.160 [2024-07-22 10:55:33.803479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.160 qpair failed and we were unable to recover it. 00:39:28.160 [2024-07-22 10:55:33.803797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.160 [2024-07-22 10:55:33.803808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.160 qpair failed and we were unable to recover it. 00:39:28.160 [2024-07-22 10:55:33.804152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.160 [2024-07-22 10:55:33.804164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.160 qpair failed and we were unable to recover it. 
00:39:28.160 [2024-07-22 10:55:33.804518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.160 [2024-07-22 10:55:33.804530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.160 qpair failed and we were unable to recover it. 00:39:28.160 [2024-07-22 10:55:33.804837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.160 [2024-07-22 10:55:33.804848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.160 qpair failed and we were unable to recover it. 00:39:28.160 [2024-07-22 10:55:33.805169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.160 [2024-07-22 10:55:33.805180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.160 qpair failed and we were unable to recover it. 00:39:28.160 [2024-07-22 10:55:33.805414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.160 [2024-07-22 10:55:33.805425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.160 qpair failed and we were unable to recover it. 00:39:28.160 [2024-07-22 10:55:33.805617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.160 [2024-07-22 10:55:33.805628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.161 qpair failed and we were unable to recover it. 00:39:28.161 [2024-07-22 10:55:33.805957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.161 [2024-07-22 10:55:33.805970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.161 qpair failed and we were unable to recover it. 00:39:28.161 [2024-07-22 10:55:33.806293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.161 [2024-07-22 10:55:33.806305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.161 qpair failed and we were unable to recover it. 00:39:28.161 [2024-07-22 10:55:33.806632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.161 [2024-07-22 10:55:33.806643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.161 qpair failed and we were unable to recover it. 00:39:28.161 [2024-07-22 10:55:33.806987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.161 [2024-07-22 10:55:33.806999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.161 qpair failed and we were unable to recover it. 00:39:28.161 [2024-07-22 10:55:33.807321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.161 [2024-07-22 10:55:33.807332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.161 qpair failed and we were unable to recover it. 
00:39:28.161 [2024-07-22 10:55:33.807699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.161 [2024-07-22 10:55:33.807710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.161 qpair failed and we were unable to recover it. 00:39:28.161 [2024-07-22 10:55:33.807765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.161 [2024-07-22 10:55:33.807775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.161 qpair failed and we were unable to recover it. 00:39:28.161 [2024-07-22 10:55:33.808085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.161 [2024-07-22 10:55:33.808096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.161 qpair failed and we were unable to recover it. 00:39:28.161 [2024-07-22 10:55:33.808445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.161 [2024-07-22 10:55:33.808456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.161 qpair failed and we were unable to recover it. 00:39:28.161 [2024-07-22 10:55:33.808786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.161 [2024-07-22 10:55:33.808797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.161 qpair failed and we were unable to recover it. 00:39:28.161 [2024-07-22 10:55:33.809143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.161 [2024-07-22 10:55:33.809153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.161 qpair failed and we were unable to recover it. 00:39:28.161 [2024-07-22 10:55:33.809492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.161 [2024-07-22 10:55:33.809504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.161 qpair failed and we were unable to recover it. 00:39:28.161 [2024-07-22 10:55:33.809877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.161 [2024-07-22 10:55:33.809888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.161 qpair failed and we were unable to recover it. 00:39:28.161 [2024-07-22 10:55:33.810201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.161 [2024-07-22 10:55:33.810212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.161 qpair failed and we were unable to recover it. 00:39:28.161 [2024-07-22 10:55:33.810515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.161 [2024-07-22 10:55:33.810526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.161 qpair failed and we were unable to recover it. 
00:39:28.161 [2024-07-22 10:55:33.810869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.161 [2024-07-22 10:55:33.810880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.161 qpair failed and we were unable to recover it. 00:39:28.161 [2024-07-22 10:55:33.811176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.161 [2024-07-22 10:55:33.811187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.161 qpair failed and we were unable to recover it. 00:39:28.161 [2024-07-22 10:55:33.811521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.161 [2024-07-22 10:55:33.811532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.161 qpair failed and we were unable to recover it. 00:39:28.161 [2024-07-22 10:55:33.811884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.161 [2024-07-22 10:55:33.811896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.161 qpair failed and we were unable to recover it. 00:39:28.161 [2024-07-22 10:55:33.812258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.161 [2024-07-22 10:55:33.812270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.161 qpair failed and we were unable to recover it. 00:39:28.161 [2024-07-22 10:55:33.812503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.161 [2024-07-22 10:55:33.812515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.161 qpair failed and we were unable to recover it. 00:39:28.161 [2024-07-22 10:55:33.812857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.161 [2024-07-22 10:55:33.812870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.161 qpair failed and we were unable to recover it. 00:39:28.161 [2024-07-22 10:55:33.813202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.161 [2024-07-22 10:55:33.813213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.161 qpair failed and we were unable to recover it. 00:39:28.161 [2024-07-22 10:55:33.813556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.161 [2024-07-22 10:55:33.813568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.161 qpair failed and we were unable to recover it. 00:39:28.161 [2024-07-22 10:55:33.813887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.161 [2024-07-22 10:55:33.813898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.161 qpair failed and we were unable to recover it. 
00:39:28.161 [2024-07-22 10:55:33.814118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.161 [2024-07-22 10:55:33.814128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.161 qpair failed and we were unable to recover it. 00:39:28.161 [2024-07-22 10:55:33.814513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.161 [2024-07-22 10:55:33.814524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.161 qpair failed and we were unable to recover it. 00:39:28.161 [2024-07-22 10:55:33.814843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.161 [2024-07-22 10:55:33.814854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.161 qpair failed and we were unable to recover it. 00:39:28.161 [2024-07-22 10:55:33.815144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.161 [2024-07-22 10:55:33.815156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.161 qpair failed and we were unable to recover it. 00:39:28.161 [2024-07-22 10:55:33.815366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.161 [2024-07-22 10:55:33.815378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.161 qpair failed and we were unable to recover it. 00:39:28.161 [2024-07-22 10:55:33.815709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.161 [2024-07-22 10:55:33.815721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.161 qpair failed and we were unable to recover it. 00:39:28.161 [2024-07-22 10:55:33.816067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.161 [2024-07-22 10:55:33.816077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.161 qpair failed and we were unable to recover it. 00:39:28.161 [2024-07-22 10:55:33.816380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.161 [2024-07-22 10:55:33.816391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.161 qpair failed and we were unable to recover it. 00:39:28.161 [2024-07-22 10:55:33.816719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.161 [2024-07-22 10:55:33.816729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.161 qpair failed and we were unable to recover it. 00:39:28.161 [2024-07-22 10:55:33.816926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.161 [2024-07-22 10:55:33.816938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.161 qpair failed and we were unable to recover it. 
00:39:28.161 [2024-07-22 10:55:33.817276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.161 [2024-07-22 10:55:33.817288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.161 qpair failed and we were unable to recover it. 00:39:28.161 [2024-07-22 10:55:33.817618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.161 [2024-07-22 10:55:33.817629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.161 qpair failed and we were unable to recover it. 00:39:28.161 [2024-07-22 10:55:33.817957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.161 [2024-07-22 10:55:33.817967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.161 qpair failed and we were unable to recover it. 00:39:28.161 [2024-07-22 10:55:33.818310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.161 [2024-07-22 10:55:33.818321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.161 qpair failed and we were unable to recover it. 00:39:28.161 [2024-07-22 10:55:33.818704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.161 [2024-07-22 10:55:33.818715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.161 qpair failed and we were unable to recover it. 00:39:28.161 [2024-07-22 10:55:33.819062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.162 [2024-07-22 10:55:33.819075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.162 qpair failed and we were unable to recover it. 00:39:28.162 [2024-07-22 10:55:33.819401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.162 [2024-07-22 10:55:33.819412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.162 qpair failed and we were unable to recover it. 00:39:28.162 [2024-07-22 10:55:33.819722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.162 [2024-07-22 10:55:33.819733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.162 qpair failed and we were unable to recover it. 00:39:28.162 [2024-07-22 10:55:33.820078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.162 [2024-07-22 10:55:33.820089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.162 qpair failed and we were unable to recover it. 00:39:28.162 [2024-07-22 10:55:33.820431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.162 [2024-07-22 10:55:33.820443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.162 qpair failed and we were unable to recover it. 
00:39:28.162 [2024-07-22 10:55:33.820783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.162 [2024-07-22 10:55:33.820794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.162 qpair failed and we were unable to recover it. 00:39:28.162 [2024-07-22 10:55:33.821017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.162 [2024-07-22 10:55:33.821028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.162 qpair failed and we were unable to recover it. 00:39:28.162 [2024-07-22 10:55:33.821340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.162 [2024-07-22 10:55:33.821352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.162 qpair failed and we were unable to recover it. 00:39:28.162 [2024-07-22 10:55:33.821657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.162 [2024-07-22 10:55:33.821669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.162 qpair failed and we were unable to recover it. 00:39:28.162 [2024-07-22 10:55:33.821990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.162 [2024-07-22 10:55:33.822001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.162 qpair failed and we were unable to recover it. 00:39:28.162 [2024-07-22 10:55:33.822337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.162 [2024-07-22 10:55:33.822348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.162 qpair failed and we were unable to recover it. 00:39:28.162 [2024-07-22 10:55:33.822679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.162 [2024-07-22 10:55:33.822690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.162 qpair failed and we were unable to recover it. 00:39:28.162 [2024-07-22 10:55:33.823040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.162 [2024-07-22 10:55:33.823051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.162 qpair failed and we were unable to recover it. 00:39:28.162 [2024-07-22 10:55:33.823404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.162 [2024-07-22 10:55:33.823416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.162 qpair failed and we were unable to recover it. 00:39:28.162 [2024-07-22 10:55:33.823619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.162 [2024-07-22 10:55:33.823632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.162 qpair failed and we were unable to recover it. 
00:39:28.162 [2024-07-22 10:55:33.823927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.162 [2024-07-22 10:55:33.823939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.162 qpair failed and we were unable to recover it. 00:39:28.162 [2024-07-22 10:55:33.824284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.162 [2024-07-22 10:55:33.824294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.162 qpair failed and we were unable to recover it. 00:39:28.162 [2024-07-22 10:55:33.824609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.162 [2024-07-22 10:55:33.824620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.162 qpair failed and we were unable to recover it. 00:39:28.162 [2024-07-22 10:55:33.824943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.162 [2024-07-22 10:55:33.824954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.162 qpair failed and we were unable to recover it. 00:39:28.162 [2024-07-22 10:55:33.825302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.162 [2024-07-22 10:55:33.825314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.162 qpair failed and we were unable to recover it. 00:39:28.162 [2024-07-22 10:55:33.825547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.162 [2024-07-22 10:55:33.825558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.162 qpair failed and we were unable to recover it. 00:39:28.162 [2024-07-22 10:55:33.825909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.162 [2024-07-22 10:55:33.825921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.162 qpair failed and we were unable to recover it. 00:39:28.162 [2024-07-22 10:55:33.826236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.162 [2024-07-22 10:55:33.826248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.162 qpair failed and we were unable to recover it. 00:39:28.162 [2024-07-22 10:55:33.826555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.162 [2024-07-22 10:55:33.826566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.162 qpair failed and we were unable to recover it. 00:39:28.162 [2024-07-22 10:55:33.826918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.162 [2024-07-22 10:55:33.826928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.162 qpair failed and we were unable to recover it. 
00:39:28.162 [2024-07-22 10:55:33.827268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.162 [2024-07-22 10:55:33.827279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.162 qpair failed and we were unable to recover it. 00:39:28.162 [2024-07-22 10:55:33.827655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.162 [2024-07-22 10:55:33.827666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.162 qpair failed and we were unable to recover it. 00:39:28.162 [2024-07-22 10:55:33.827987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.162 [2024-07-22 10:55:33.827999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.162 qpair failed and we were unable to recover it. 00:39:28.162 [2024-07-22 10:55:33.828324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.162 [2024-07-22 10:55:33.828334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.162 qpair failed and we were unable to recover it. 00:39:28.162 [2024-07-22 10:55:33.828637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.162 [2024-07-22 10:55:33.828649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.162 qpair failed and we were unable to recover it. 00:39:28.162 [2024-07-22 10:55:33.828961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.162 [2024-07-22 10:55:33.828972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.162 qpair failed and we were unable to recover it. 00:39:28.162 [2024-07-22 10:55:33.829165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.162 [2024-07-22 10:55:33.829177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.162 qpair failed and we were unable to recover it. 00:39:28.162 [2024-07-22 10:55:33.829511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.162 [2024-07-22 10:55:33.829523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.162 qpair failed and we were unable to recover it. 00:39:28.162 [2024-07-22 10:55:33.829862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.162 [2024-07-22 10:55:33.829873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.162 qpair failed and we were unable to recover it. 00:39:28.432 [2024-07-22 10:55:33.830190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.432 [2024-07-22 10:55:33.830202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.432 qpair failed and we were unable to recover it. 
00:39:28.432 [2024-07-22 10:55:33.830536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.432 [2024-07-22 10:55:33.830547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.432 qpair failed and we were unable to recover it. 00:39:28.432 [2024-07-22 10:55:33.830843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.432 [2024-07-22 10:55:33.830854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.432 qpair failed and we were unable to recover it. 00:39:28.433 [2024-07-22 10:55:33.831180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.433 [2024-07-22 10:55:33.831191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.433 qpair failed and we were unable to recover it. 00:39:28.433 [2024-07-22 10:55:33.831489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.433 [2024-07-22 10:55:33.831501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.433 qpair failed and we were unable to recover it. 00:39:28.433 [2024-07-22 10:55:33.831794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.433 [2024-07-22 10:55:33.831804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.433 qpair failed and we were unable to recover it. 00:39:28.433 [2024-07-22 10:55:33.832111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.433 [2024-07-22 10:55:33.832123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.433 qpair failed and we were unable to recover it. 00:39:28.433 [2024-07-22 10:55:33.832325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.433 [2024-07-22 10:55:33.832336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.433 qpair failed and we were unable to recover it. 00:39:28.433 [2024-07-22 10:55:33.832686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.433 [2024-07-22 10:55:33.832697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.433 qpair failed and we were unable to recover it. 00:39:28.433 [2024-07-22 10:55:33.833031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.433 [2024-07-22 10:55:33.833043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.433 qpair failed and we were unable to recover it. 00:39:28.433 [2024-07-22 10:55:33.833384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.433 [2024-07-22 10:55:33.833401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.433 qpair failed and we were unable to recover it. 
00:39:28.433 [2024-07-22 10:55:33.833739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.433 [2024-07-22 10:55:33.833750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.433 qpair failed and we were unable to recover it. 00:39:28.433 [2024-07-22 10:55:33.834076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.433 [2024-07-22 10:55:33.834087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.433 qpair failed and we were unable to recover it. 00:39:28.433 [2024-07-22 10:55:33.834439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.433 [2024-07-22 10:55:33.834451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.433 qpair failed and we were unable to recover it. 00:39:28.433 [2024-07-22 10:55:33.834780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.433 [2024-07-22 10:55:33.834791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.433 qpair failed and we were unable to recover it. 00:39:28.433 [2024-07-22 10:55:33.835184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.433 [2024-07-22 10:55:33.835196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.433 qpair failed and we were unable to recover it. 00:39:28.433 [2024-07-22 10:55:33.835509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.433 [2024-07-22 10:55:33.835520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.433 qpair failed and we were unable to recover it. 00:39:28.433 [2024-07-22 10:55:33.835875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.433 [2024-07-22 10:55:33.835886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.433 qpair failed and we were unable to recover it. 00:39:28.433 [2024-07-22 10:55:33.836231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.433 [2024-07-22 10:55:33.836242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.433 qpair failed and we were unable to recover it. 00:39:28.433 [2024-07-22 10:55:33.836383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.433 [2024-07-22 10:55:33.836399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.433 qpair failed and we were unable to recover it. 00:39:28.433 [2024-07-22 10:55:33.836743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.433 [2024-07-22 10:55:33.836758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.433 qpair failed and we were unable to recover it. 
00:39:28.433 [2024-07-22 10:55:33.837098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.433 [2024-07-22 10:55:33.837110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.433 qpair failed and we were unable to recover it. 00:39:28.433 [2024-07-22 10:55:33.837351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.433 [2024-07-22 10:55:33.837362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.433 qpair failed and we were unable to recover it. 00:39:28.433 [2024-07-22 10:55:33.837689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.433 [2024-07-22 10:55:33.837701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.433 qpair failed and we were unable to recover it. 00:39:28.433 [2024-07-22 10:55:33.838026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.433 [2024-07-22 10:55:33.838037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.433 qpair failed and we were unable to recover it. 00:39:28.433 [2024-07-22 10:55:33.838384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.433 [2024-07-22 10:55:33.838411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.433 qpair failed and we were unable to recover it. 00:39:28.433 [2024-07-22 10:55:33.838663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.433 [2024-07-22 10:55:33.838675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.433 qpair failed and we were unable to recover it. 00:39:28.433 [2024-07-22 10:55:33.838995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.433 [2024-07-22 10:55:33.839006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.433 qpair failed and we were unable to recover it. 00:39:28.433 [2024-07-22 10:55:33.839336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.433 [2024-07-22 10:55:33.839347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.433 qpair failed and we were unable to recover it. 00:39:28.433 [2024-07-22 10:55:33.839711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.433 [2024-07-22 10:55:33.839723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.433 qpair failed and we were unable to recover it. 00:39:28.433 [2024-07-22 10:55:33.840070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.433 [2024-07-22 10:55:33.840081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.433 qpair failed and we were unable to recover it. 
00:39:28.433 [2024-07-22 10:55:33.840409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.433 [2024-07-22 10:55:33.840421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.433 qpair failed and we were unable to recover it. 00:39:28.433 [2024-07-22 10:55:33.840706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.433 [2024-07-22 10:55:33.840716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.433 qpair failed and we were unable to recover it. 00:39:28.433 [2024-07-22 10:55:33.841064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.433 [2024-07-22 10:55:33.841076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.433 qpair failed and we were unable to recover it. 00:39:28.433 [2024-07-22 10:55:33.841422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.433 [2024-07-22 10:55:33.841434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.433 qpair failed and we were unable to recover it. 00:39:28.433 [2024-07-22 10:55:33.841782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.433 [2024-07-22 10:55:33.841793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.433 qpair failed and we were unable to recover it. 00:39:28.433 [2024-07-22 10:55:33.842121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.433 [2024-07-22 10:55:33.842133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.433 qpair failed and we were unable to recover it. 00:39:28.433 [2024-07-22 10:55:33.842476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.433 [2024-07-22 10:55:33.842487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.433 qpair failed and we were unable to recover it. 00:39:28.433 [2024-07-22 10:55:33.843298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.433 [2024-07-22 10:55:33.843318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.433 qpair failed and we were unable to recover it. 00:39:28.433 [2024-07-22 10:55:33.843657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.433 [2024-07-22 10:55:33.843669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.433 qpair failed and we were unable to recover it. 00:39:28.433 [2024-07-22 10:55:33.843990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.433 [2024-07-22 10:55:33.844002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.433 qpair failed and we were unable to recover it. 
00:39:28.433 [2024-07-22 10:55:33.844298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.433 [2024-07-22 10:55:33.844309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.433 qpair failed and we were unable to recover it. 00:39:28.433 [2024-07-22 10:55:33.844621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.433 [2024-07-22 10:55:33.844633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.433 qpair failed and we were unable to recover it. 00:39:28.433 [2024-07-22 10:55:33.844954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.434 [2024-07-22 10:55:33.844966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.434 qpair failed and we were unable to recover it. 00:39:28.434 [2024-07-22 10:55:33.845157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.434 [2024-07-22 10:55:33.845168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.434 qpair failed and we were unable to recover it. 00:39:28.434 [2024-07-22 10:55:33.845468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.434 [2024-07-22 10:55:33.845479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.434 qpair failed and we were unable to recover it. 00:39:28.434 [2024-07-22 10:55:33.845793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.434 [2024-07-22 10:55:33.845804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.434 qpair failed and we were unable to recover it. 00:39:28.434 [2024-07-22 10:55:33.846118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.434 [2024-07-22 10:55:33.846129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.434 qpair failed and we were unable to recover it. 00:39:28.434 [2024-07-22 10:55:33.846455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.434 [2024-07-22 10:55:33.846467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.434 qpair failed and we were unable to recover it. 00:39:28.434 [2024-07-22 10:55:33.847292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.434 [2024-07-22 10:55:33.847312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.434 qpair failed and we were unable to recover it. 00:39:28.434 [2024-07-22 10:55:33.847535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.434 [2024-07-22 10:55:33.847547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.434 qpair failed and we were unable to recover it. 
00:39:28.434 [2024-07-22 10:55:33.847871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.434 [2024-07-22 10:55:33.847882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.434 qpair failed and we were unable to recover it. 00:39:28.434 [2024-07-22 10:55:33.848188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.434 [2024-07-22 10:55:33.848200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.434 qpair failed and we were unable to recover it. 00:39:28.434 [2024-07-22 10:55:33.848543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.434 [2024-07-22 10:55:33.848554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.434 qpair failed and we were unable to recover it. 00:39:28.434 [2024-07-22 10:55:33.848864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.434 [2024-07-22 10:55:33.848875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.434 qpair failed and we were unable to recover it.
00:39:28.434 [2024-07-22 10:55:33.848896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:39:28.434 [2024-07-22 10:55:33.849169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.434 [2024-07-22 10:55:33.849180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.434 qpair failed and we were unable to recover it. 00:39:28.434 [2024-07-22 10:55:33.849516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.434 [2024-07-22 10:55:33.849528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.434 qpair failed and we were unable to recover it. 00:39:28.434 [2024-07-22 10:55:33.849844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.434 [2024-07-22 10:55:33.849856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.434 qpair failed and we were unable to recover it. 00:39:28.434 [2024-07-22 10:55:33.850165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.434 [2024-07-22 10:55:33.850176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.434 qpair failed and we were unable to recover it. 00:39:28.434 [2024-07-22 10:55:33.850367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.434 [2024-07-22 10:55:33.850378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.434 qpair failed and we were unable to recover it. 00:39:28.434 [2024-07-22 10:55:33.850694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.434 [2024-07-22 10:55:33.850705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.434 qpair failed and we were unable to recover it.
00:39:28.434 [2024-07-22 10:55:33.851052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.434 [2024-07-22 10:55:33.851063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.434 qpair failed and we were unable to recover it. 00:39:28.434 [2024-07-22 10:55:33.851415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.434 [2024-07-22 10:55:33.851427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.434 qpair failed and we were unable to recover it. 00:39:28.434 [2024-07-22 10:55:33.851613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.434 [2024-07-22 10:55:33.851625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.434 qpair failed and we were unable to recover it. 00:39:28.434 [2024-07-22 10:55:33.851904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.434 [2024-07-22 10:55:33.851915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.434 qpair failed and we were unable to recover it. 00:39:28.434 [2024-07-22 10:55:33.852261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.434 [2024-07-22 10:55:33.852272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.434 qpair failed and we were unable to recover it. 00:39:28.434 [2024-07-22 10:55:33.852604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.434 [2024-07-22 10:55:33.852615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.434 qpair failed and we were unable to recover it. 00:39:28.434 [2024-07-22 10:55:33.852940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.434 [2024-07-22 10:55:33.852951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.434 qpair failed and we were unable to recover it. 00:39:28.434 [2024-07-22 10:55:33.853295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.434 [2024-07-22 10:55:33.853307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.434 qpair failed and we were unable to recover it. 00:39:28.434 [2024-07-22 10:55:33.853630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.434 [2024-07-22 10:55:33.853641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.434 qpair failed and we were unable to recover it. 00:39:28.434 [2024-07-22 10:55:33.853851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.434 [2024-07-22 10:55:33.853862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.434 qpair failed and we were unable to recover it. 
00:39:28.434 [2024-07-22 10:55:33.854164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.434 [2024-07-22 10:55:33.854176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.434 qpair failed and we were unable to recover it. 00:39:28.434 [2024-07-22 10:55:33.854374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.434 [2024-07-22 10:55:33.854385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.434 qpair failed and we were unable to recover it. 00:39:28.434 [2024-07-22 10:55:33.854707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.434 [2024-07-22 10:55:33.854719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.434 qpair failed and we were unable to recover it. 00:39:28.434 [2024-07-22 10:55:33.855056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.434 [2024-07-22 10:55:33.855070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.434 qpair failed and we were unable to recover it. 00:39:28.434 [2024-07-22 10:55:33.855401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.434 [2024-07-22 10:55:33.855413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.434 qpair failed and we were unable to recover it. 00:39:28.434 [2024-07-22 10:55:33.855774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.434 [2024-07-22 10:55:33.855785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.434 qpair failed and we were unable to recover it. 00:39:28.434 [2024-07-22 10:55:33.856023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.434 [2024-07-22 10:55:33.856033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.434 qpair failed and we were unable to recover it. 00:39:28.434 [2024-07-22 10:55:33.856348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.434 [2024-07-22 10:55:33.856359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.434 qpair failed and we were unable to recover it. 00:39:28.434 [2024-07-22 10:55:33.856680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.434 [2024-07-22 10:55:33.856692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.434 qpair failed and we were unable to recover it. 00:39:28.434 [2024-07-22 10:55:33.857014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.434 [2024-07-22 10:55:33.857025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.434 qpair failed and we were unable to recover it. 
00:39:28.434 [2024-07-22 10:55:33.857372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.434 [2024-07-22 10:55:33.857383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.434 qpair failed and we were unable to recover it. 00:39:28.434 [2024-07-22 10:55:33.857762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.434 [2024-07-22 10:55:33.857774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.434 qpair failed and we were unable to recover it. 00:39:28.434 [2024-07-22 10:55:33.858090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.435 [2024-07-22 10:55:33.858102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.435 qpair failed and we were unable to recover it. 00:39:28.435 [2024-07-22 10:55:33.858424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.435 [2024-07-22 10:55:33.858436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.435 qpair failed and we were unable to recover it. 00:39:28.435 [2024-07-22 10:55:33.858776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.435 [2024-07-22 10:55:33.858787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.435 qpair failed and we were unable to recover it. 00:39:28.435 [2024-07-22 10:55:33.859130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.435 [2024-07-22 10:55:33.859142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.435 qpair failed and we were unable to recover it. 00:39:28.435 [2024-07-22 10:55:33.859469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.435 [2024-07-22 10:55:33.859482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.435 qpair failed and we were unable to recover it. 00:39:28.435 [2024-07-22 10:55:33.859689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.435 [2024-07-22 10:55:33.859701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.435 qpair failed and we were unable to recover it. 00:39:28.435 [2024-07-22 10:55:33.860037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.435 [2024-07-22 10:55:33.860048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.435 qpair failed and we were unable to recover it. 00:39:28.435 [2024-07-22 10:55:33.860375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.435 [2024-07-22 10:55:33.860387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.435 qpair failed and we were unable to recover it. 
00:39:28.435 [2024-07-22 10:55:33.860709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.435 [2024-07-22 10:55:33.860722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.435 qpair failed and we were unable to recover it. 00:39:28.435 [2024-07-22 10:55:33.860934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.435 [2024-07-22 10:55:33.860946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.435 qpair failed and we were unable to recover it. 00:39:28.435 [2024-07-22 10:55:33.861291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.435 [2024-07-22 10:55:33.861302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.435 qpair failed and we were unable to recover it. 00:39:28.435 [2024-07-22 10:55:33.861635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.435 [2024-07-22 10:55:33.861646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.435 qpair failed and we were unable to recover it. 00:39:28.435 [2024-07-22 10:55:33.861961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.435 [2024-07-22 10:55:33.861972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.435 qpair failed and we were unable to recover it. 00:39:28.435 [2024-07-22 10:55:33.862161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.435 [2024-07-22 10:55:33.862173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.435 qpair failed and we were unable to recover it. 00:39:28.435 [2024-07-22 10:55:33.862481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.435 [2024-07-22 10:55:33.862492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.435 qpair failed and we were unable to recover it. 00:39:28.435 [2024-07-22 10:55:33.862846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.435 [2024-07-22 10:55:33.862856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.435 qpair failed and we were unable to recover it. 00:39:28.435 [2024-07-22 10:55:33.863170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.435 [2024-07-22 10:55:33.863181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.435 qpair failed and we were unable to recover it. 00:39:28.435 [2024-07-22 10:55:33.863526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.435 [2024-07-22 10:55:33.863537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.435 qpair failed and we were unable to recover it. 
00:39:28.435 [2024-07-22 10:55:33.863897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.435 [2024-07-22 10:55:33.863909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.435 qpair failed and we were unable to recover it. 00:39:28.435 [2024-07-22 10:55:33.864249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.435 [2024-07-22 10:55:33.864262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.435 qpair failed and we were unable to recover it. 00:39:28.435 [2024-07-22 10:55:33.864583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.435 [2024-07-22 10:55:33.864595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.435 qpair failed and we were unable to recover it. 00:39:28.435 [2024-07-22 10:55:33.864884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.435 [2024-07-22 10:55:33.864896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.435 qpair failed and we were unable to recover it. 00:39:28.435 [2024-07-22 10:55:33.865086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.435 [2024-07-22 10:55:33.865096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.435 qpair failed and we were unable to recover it. 00:39:28.435 [2024-07-22 10:55:33.865420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.435 [2024-07-22 10:55:33.865432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.435 qpair failed and we were unable to recover it. 00:39:28.435 [2024-07-22 10:55:33.865667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.435 [2024-07-22 10:55:33.865677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.435 qpair failed and we were unable to recover it. 00:39:28.435 [2024-07-22 10:55:33.865983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.435 [2024-07-22 10:55:33.865995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.435 qpair failed and we were unable to recover it. 00:39:28.435 [2024-07-22 10:55:33.866312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.435 [2024-07-22 10:55:33.866325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.435 qpair failed and we were unable to recover it. 00:39:28.435 [2024-07-22 10:55:33.866654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.435 [2024-07-22 10:55:33.866665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.435 qpair failed and we were unable to recover it. 
00:39:28.435 [2024-07-22 10:55:33.867016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.435 [2024-07-22 10:55:33.867028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.435 qpair failed and we were unable to recover it. 00:39:28.435 [2024-07-22 10:55:33.867375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.435 [2024-07-22 10:55:33.867386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.435 qpair failed and we were unable to recover it. 00:39:28.435 [2024-07-22 10:55:33.867742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.435 [2024-07-22 10:55:33.867753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.435 qpair failed and we were unable to recover it. 00:39:28.435 [2024-07-22 10:55:33.867951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.435 [2024-07-22 10:55:33.867962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.435 qpair failed and we were unable to recover it. 00:39:28.435 [2024-07-22 10:55:33.868281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.435 [2024-07-22 10:55:33.868292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.435 qpair failed and we were unable to recover it. 00:39:28.435 [2024-07-22 10:55:33.868522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.435 [2024-07-22 10:55:33.868535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.435 qpair failed and we were unable to recover it. 00:39:28.435 [2024-07-22 10:55:33.868877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.435 [2024-07-22 10:55:33.868888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.435 qpair failed and we were unable to recover it. 00:39:28.435 [2024-07-22 10:55:33.869209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.435 [2024-07-22 10:55:33.869221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.435 qpair failed and we were unable to recover it. 00:39:28.435 [2024-07-22 10:55:33.869568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.435 [2024-07-22 10:55:33.869581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.435 qpair failed and we were unable to recover it. 00:39:28.435 [2024-07-22 10:55:33.869925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.435 [2024-07-22 10:55:33.869936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.435 qpair failed and we were unable to recover it. 
00:39:28.435 [2024-07-22 10:55:33.870258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.435 [2024-07-22 10:55:33.870269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.435 qpair failed and we were unable to recover it. 00:39:28.435 [2024-07-22 10:55:33.870440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.435 [2024-07-22 10:55:33.870452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.435 qpair failed and we were unable to recover it. 00:39:28.435 [2024-07-22 10:55:33.870748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.435 [2024-07-22 10:55:33.870760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.435 qpair failed and we were unable to recover it. 00:39:28.435 [2024-07-22 10:55:33.871106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.436 [2024-07-22 10:55:33.871118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.436 qpair failed and we were unable to recover it. 00:39:28.436 [2024-07-22 10:55:33.871389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.436 [2024-07-22 10:55:33.871405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.436 qpair failed and we were unable to recover it. 00:39:28.436 [2024-07-22 10:55:33.871597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.436 [2024-07-22 10:55:33.871607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.436 qpair failed and we were unable to recover it. 00:39:28.436 [2024-07-22 10:55:33.871925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.436 [2024-07-22 10:55:33.871936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.436 qpair failed and we were unable to recover it. 00:39:28.436 [2024-07-22 10:55:33.872283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.436 [2024-07-22 10:55:33.872294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.436 qpair failed and we were unable to recover it. 00:39:28.436 [2024-07-22 10:55:33.872486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.436 [2024-07-22 10:55:33.872500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.436 qpair failed and we were unable to recover it. 00:39:28.436 [2024-07-22 10:55:33.872798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.436 [2024-07-22 10:55:33.872809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.436 qpair failed and we were unable to recover it. 
00:39:28.436 [2024-07-22 10:55:33.873158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.436 [2024-07-22 10:55:33.873169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.436 qpair failed and we were unable to recover it. 00:39:28.436 [2024-07-22 10:55:33.873357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.436 [2024-07-22 10:55:33.873369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.436 qpair failed and we were unable to recover it. 00:39:28.436 [2024-07-22 10:55:33.873682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.436 [2024-07-22 10:55:33.873694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.436 qpair failed and we were unable to recover it. 00:39:28.436 [2024-07-22 10:55:33.874032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.436 [2024-07-22 10:55:33.874043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.436 qpair failed and we were unable to recover it. 00:39:28.436 [2024-07-22 10:55:33.874422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.436 [2024-07-22 10:55:33.874434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.436 qpair failed and we were unable to recover it. 00:39:28.436 [2024-07-22 10:55:33.874811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.436 [2024-07-22 10:55:33.874822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.436 qpair failed and we were unable to recover it. 00:39:28.436 [2024-07-22 10:55:33.875143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.436 [2024-07-22 10:55:33.875154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.436 qpair failed and we were unable to recover it. 00:39:28.436 [2024-07-22 10:55:33.875479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.436 [2024-07-22 10:55:33.875491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.436 qpair failed and we were unable to recover it. 00:39:28.436 [2024-07-22 10:55:33.875813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.436 [2024-07-22 10:55:33.875824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.436 qpair failed and we were unable to recover it. 00:39:28.436 [2024-07-22 10:55:33.876170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.436 [2024-07-22 10:55:33.876182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.436 qpair failed and we were unable to recover it. 
00:39:28.436 [2024-07-22 10:55:33.876534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.436 [2024-07-22 10:55:33.876546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.436 qpair failed and we were unable to recover it. 00:39:28.436 [2024-07-22 10:55:33.877538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.436 [2024-07-22 10:55:33.877569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.436 qpair failed and we were unable to recover it. 00:39:28.436 [2024-07-22 10:55:33.877930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.436 [2024-07-22 10:55:33.877942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.436 qpair failed and we were unable to recover it. 00:39:28.436 [2024-07-22 10:55:33.878294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.436 [2024-07-22 10:55:33.878305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.436 qpair failed and we were unable to recover it. 00:39:28.436 [2024-07-22 10:55:33.879044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.436 [2024-07-22 10:55:33.879065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.436 qpair failed and we were unable to recover it. 00:39:28.436 [2024-07-22 10:55:33.879392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.436 [2024-07-22 10:55:33.879410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.436 qpair failed and we were unable to recover it. 00:39:28.436 [2024-07-22 10:55:33.880210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.436 [2024-07-22 10:55:33.880230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.436 qpair failed and we were unable to recover it. 00:39:28.436 [2024-07-22 10:55:33.880568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.436 [2024-07-22 10:55:33.880580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.436 qpair failed and we were unable to recover it.
00:39:28.436 [2024-07-22 10:55:33.880817] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:39:28.436 [2024-07-22 10:55:33.880849] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:39:28.436 [2024-07-22 10:55:33.880856] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:39:28.436 [2024-07-22 10:55:33.880862] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:39:28.436 [2024-07-22 10:55:33.880868] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:39:28.436 [2024-07-22 10:55:33.880921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.436 [2024-07-22 10:55:33.880931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.436 qpair failed and we were unable to recover it.
00:39:28.436 [2024-07-22 10:55:33.881004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:39:28.436 [2024-07-22 10:55:33.881239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.436 [2024-07-22 10:55:33.881137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:39:28.436 [2024-07-22 10:55:33.881249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.436 qpair failed and we were unable to recover it.
00:39:28.436 [2024-07-22 10:55:33.881268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:39:28.436 [2024-07-22 10:55:33.881269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:39:28.436 [2024-07-22 10:55:33.881491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.436 [2024-07-22 10:55:33.881502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.436 qpair failed and we were unable to recover it. 00:39:28.436 [2024-07-22 10:55:33.881706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.436 [2024-07-22 10:55:33.881716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.436 qpair failed and we were unable to recover it. 00:39:28.436 [2024-07-22 10:55:33.882050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.436 [2024-07-22 10:55:33.882062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.436 qpair failed and we were unable to recover it. 00:39:28.436 [2024-07-22 10:55:33.882416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.436 [2024-07-22 10:55:33.882429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.436 qpair failed and we were unable to recover it. 00:39:28.436 [2024-07-22 10:55:33.882659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.436 [2024-07-22 10:55:33.882672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.436 qpair failed and we were unable to recover it. 00:39:28.436 [2024-07-22 10:55:33.883021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.436 [2024-07-22 10:55:33.883032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.436 qpair failed and we were unable to recover it. 00:39:28.436 [2024-07-22 10:55:33.883260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.436 [2024-07-22 10:55:33.883270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.436 qpair failed and we were unable to recover it.
00:39:28.436 [2024-07-22 10:55:33.883579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.436 [2024-07-22 10:55:33.883590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.436 qpair failed and we were unable to recover it. 00:39:28.436 [2024-07-22 10:55:33.883897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.436 [2024-07-22 10:55:33.883908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.436 qpair failed and we were unable to recover it. 00:39:28.436 [2024-07-22 10:55:33.884095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.436 [2024-07-22 10:55:33.884105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.436 qpair failed and we were unable to recover it. 00:39:28.436 [2024-07-22 10:55:33.884456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.436 [2024-07-22 10:55:33.884468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.436 qpair failed and we were unable to recover it. 00:39:28.437 [2024-07-22 10:55:33.884791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.437 [2024-07-22 10:55:33.884803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.437 qpair failed and we were unable to recover it. 00:39:28.437 [2024-07-22 10:55:33.885156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.437 [2024-07-22 10:55:33.885168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.437 qpair failed and we were unable to recover it. 00:39:28.437 [2024-07-22 10:55:33.885511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.437 [2024-07-22 10:55:33.885523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.437 qpair failed and we were unable to recover it. 00:39:28.437 [2024-07-22 10:55:33.885872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.437 [2024-07-22 10:55:33.885883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.437 qpair failed and we were unable to recover it. 00:39:28.437 [2024-07-22 10:55:33.886209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.437 [2024-07-22 10:55:33.886219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.437 qpair failed and we were unable to recover it. 00:39:28.437 [2024-07-22 10:55:33.886514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.437 [2024-07-22 10:55:33.886526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.437 qpair failed and we were unable to recover it. 
00:39:28.437 [2024-07-22 10:55:33.886864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.437 [2024-07-22 10:55:33.886874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.437 qpair failed and we were unable to recover it. 00:39:28.437 [2024-07-22 10:55:33.887199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.437 [2024-07-22 10:55:33.887210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.437 qpair failed and we were unable to recover it. 00:39:28.437 [2024-07-22 10:55:33.887531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.437 [2024-07-22 10:55:33.887542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.437 qpair failed and we were unable to recover it. 00:39:28.437 [2024-07-22 10:55:33.887789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.437 [2024-07-22 10:55:33.887800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.437 qpair failed and we were unable to recover it. 00:39:28.437 [2024-07-22 10:55:33.888128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.437 [2024-07-22 10:55:33.888140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.437 qpair failed and we were unable to recover it. 00:39:28.437 [2024-07-22 10:55:33.888485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.437 [2024-07-22 10:55:33.888496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.437 qpair failed and we were unable to recover it. 00:39:28.437 [2024-07-22 10:55:33.888860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.438 [2024-07-22 10:55:33.888871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.438 qpair failed and we were unable to recover it. 00:39:28.438 [2024-07-22 10:55:33.889099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.438 [2024-07-22 10:55:33.889111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.438 qpair failed and we were unable to recover it. 00:39:28.438 [2024-07-22 10:55:33.889426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.438 [2024-07-22 10:55:33.889437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.438 qpair failed and we were unable to recover it. 00:39:28.438 [2024-07-22 10:55:33.889651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.438 [2024-07-22 10:55:33.889663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.438 qpair failed and we were unable to recover it. 
00:39:28.438 [2024-07-22 10:55:33.889981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.438 [2024-07-22 10:55:33.889991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.438 qpair failed and we were unable to recover it. 00:39:28.438 [2024-07-22 10:55:33.890342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.438 [2024-07-22 10:55:33.890353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.438 qpair failed and we were unable to recover it. 00:39:28.438 [2024-07-22 10:55:33.890675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.438 [2024-07-22 10:55:33.890688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.438 qpair failed and we were unable to recover it. 00:39:28.438 [2024-07-22 10:55:33.890878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.438 [2024-07-22 10:55:33.890889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.438 qpair failed and we were unable to recover it. 00:39:28.438 [2024-07-22 10:55:33.891225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.438 [2024-07-22 10:55:33.891237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.438 qpair failed and we were unable to recover it. 00:39:28.438 [2024-07-22 10:55:33.891450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.438 [2024-07-22 10:55:33.891462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.438 qpair failed and we were unable to recover it. 00:39:28.438 [2024-07-22 10:55:33.891821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.438 [2024-07-22 10:55:33.891834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.438 qpair failed and we were unable to recover it. 00:39:28.438 [2024-07-22 10:55:33.892157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.438 [2024-07-22 10:55:33.892167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.438 qpair failed and we were unable to recover it. 00:39:28.438 [2024-07-22 10:55:33.892384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.438 [2024-07-22 10:55:33.892400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.438 qpair failed and we were unable to recover it. 00:39:28.438 [2024-07-22 10:55:33.892698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.438 [2024-07-22 10:55:33.892709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.438 qpair failed and we were unable to recover it. 
00:39:28.438 [2024-07-22 10:55:33.893042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.438 [2024-07-22 10:55:33.893052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:28.438 qpair failed and we were unable to recover it.
00:39:28.443 [2024-07-22 10:55:33.893376 - 10:55:33.957387] the same pair of errors (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420) repeats continuously, each attempt ending with "qpair failed and we were unable to recover it."
00:39:28.443 [2024-07-22 10:55:33.957724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.443 [2024-07-22 10:55:33.957735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.443 qpair failed and we were unable to recover it. 00:39:28.443 [2024-07-22 10:55:33.958025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.443 [2024-07-22 10:55:33.958036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.443 qpair failed and we were unable to recover it. 00:39:28.443 [2024-07-22 10:55:33.958325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.443 [2024-07-22 10:55:33.958335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.443 qpair failed and we were unable to recover it. 00:39:28.443 [2024-07-22 10:55:33.958392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.443 [2024-07-22 10:55:33.958409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.443 qpair failed and we were unable to recover it. 00:39:28.443 [2024-07-22 10:55:33.958623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.443 [2024-07-22 10:55:33.958635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.443 qpair failed and we were unable to recover it. 00:39:28.443 [2024-07-22 10:55:33.958991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.443 [2024-07-22 10:55:33.959002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.443 qpair failed and we were unable to recover it. 00:39:28.443 [2024-07-22 10:55:33.959174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.443 [2024-07-22 10:55:33.959187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.443 qpair failed and we were unable to recover it. 00:39:28.443 [2024-07-22 10:55:33.959236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.443 [2024-07-22 10:55:33.959247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.443 qpair failed and we were unable to recover it. 00:39:28.443 [2024-07-22 10:55:33.959547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.443 [2024-07-22 10:55:33.959559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.443 qpair failed and we were unable to recover it. 00:39:28.443 [2024-07-22 10:55:33.959701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.444 [2024-07-22 10:55:33.959711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.444 qpair failed and we were unable to recover it. 
00:39:28.444 [2024-07-22 10:55:33.960017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.444 [2024-07-22 10:55:33.960028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.444 qpair failed and we were unable to recover it. 00:39:28.444 [2024-07-22 10:55:33.960313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.444 [2024-07-22 10:55:33.960324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.444 qpair failed and we were unable to recover it. 00:39:28.444 [2024-07-22 10:55:33.960636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.444 [2024-07-22 10:55:33.960649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.444 qpair failed and we were unable to recover it. 00:39:28.444 [2024-07-22 10:55:33.960996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.444 [2024-07-22 10:55:33.961007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.444 qpair failed and we were unable to recover it. 00:39:28.444 [2024-07-22 10:55:33.961372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.444 [2024-07-22 10:55:33.961383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.444 qpair failed and we were unable to recover it. 00:39:28.444 [2024-07-22 10:55:33.961712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.444 [2024-07-22 10:55:33.961723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.444 qpair failed and we were unable to recover it. 00:39:28.444 [2024-07-22 10:55:33.962048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.444 [2024-07-22 10:55:33.962060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.444 qpair failed and we were unable to recover it. 00:39:28.444 [2024-07-22 10:55:33.962412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.444 [2024-07-22 10:55:33.962423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.444 qpair failed and we were unable to recover it. 00:39:28.444 [2024-07-22 10:55:33.962754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.444 [2024-07-22 10:55:33.962764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.444 qpair failed and we were unable to recover it. 00:39:28.444 [2024-07-22 10:55:33.963161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.444 [2024-07-22 10:55:33.963172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.444 qpair failed and we were unable to recover it. 
00:39:28.444 [2024-07-22 10:55:33.963362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.444 [2024-07-22 10:55:33.963375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.444 qpair failed and we were unable to recover it. 00:39:28.444 [2024-07-22 10:55:33.963694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.444 [2024-07-22 10:55:33.963706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.444 qpair failed and we were unable to recover it. 00:39:28.444 [2024-07-22 10:55:33.964052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.444 [2024-07-22 10:55:33.964063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.444 qpair failed and we were unable to recover it. 00:39:28.444 [2024-07-22 10:55:33.964386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.444 [2024-07-22 10:55:33.964402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.444 qpair failed and we were unable to recover it. 00:39:28.444 [2024-07-22 10:55:33.964710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.444 [2024-07-22 10:55:33.964721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.444 qpair failed and we were unable to recover it. 00:39:28.444 [2024-07-22 10:55:33.965071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.444 [2024-07-22 10:55:33.965082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.444 qpair failed and we were unable to recover it. 00:39:28.444 [2024-07-22 10:55:33.965431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.444 [2024-07-22 10:55:33.965443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.444 qpair failed and we were unable to recover it. 00:39:28.444 [2024-07-22 10:55:33.965768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.444 [2024-07-22 10:55:33.965778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.444 qpair failed and we were unable to recover it. 00:39:28.444 [2024-07-22 10:55:33.965974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.444 [2024-07-22 10:55:33.965984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.444 qpair failed and we were unable to recover it. 00:39:28.444 [2024-07-22 10:55:33.966314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.444 [2024-07-22 10:55:33.966326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.444 qpair failed and we were unable to recover it. 
00:39:28.444 [2024-07-22 10:55:33.966666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.444 [2024-07-22 10:55:33.966677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.444 qpair failed and we were unable to recover it. 00:39:28.444 [2024-07-22 10:55:33.967000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.444 [2024-07-22 10:55:33.967010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.444 qpair failed and we were unable to recover it. 00:39:28.444 [2024-07-22 10:55:33.967334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.444 [2024-07-22 10:55:33.967345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.444 qpair failed and we were unable to recover it. 00:39:28.444 [2024-07-22 10:55:33.967653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.444 [2024-07-22 10:55:33.967664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.444 qpair failed and we were unable to recover it. 00:39:28.444 [2024-07-22 10:55:33.968019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.444 [2024-07-22 10:55:33.968030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.444 qpair failed and we were unable to recover it. 00:39:28.444 [2024-07-22 10:55:33.968218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.444 [2024-07-22 10:55:33.968229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.444 qpair failed and we were unable to recover it. 00:39:28.444 [2024-07-22 10:55:33.968436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.444 [2024-07-22 10:55:33.968447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.444 qpair failed and we were unable to recover it. 00:39:28.444 [2024-07-22 10:55:33.968820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.444 [2024-07-22 10:55:33.968831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.444 qpair failed and we were unable to recover it. 00:39:28.444 [2024-07-22 10:55:33.969180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.444 [2024-07-22 10:55:33.969191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.444 qpair failed and we were unable to recover it. 00:39:28.444 [2024-07-22 10:55:33.969255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.444 [2024-07-22 10:55:33.969267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.444 qpair failed and we were unable to recover it. 
00:39:28.444 [2024-07-22 10:55:33.969364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.444 [2024-07-22 10:55:33.969375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.444 qpair failed and we were unable to recover it. 00:39:28.444 [2024-07-22 10:55:33.969537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.444 [2024-07-22 10:55:33.969548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.444 qpair failed and we were unable to recover it. 00:39:28.444 [2024-07-22 10:55:33.969832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.444 [2024-07-22 10:55:33.969842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.444 qpair failed and we were unable to recover it. 00:39:28.444 [2024-07-22 10:55:33.970190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.444 [2024-07-22 10:55:33.970202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.444 qpair failed and we were unable to recover it. 00:39:28.444 [2024-07-22 10:55:33.970562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.444 [2024-07-22 10:55:33.970573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.444 qpair failed and we were unable to recover it. 00:39:28.444 [2024-07-22 10:55:33.970903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.444 [2024-07-22 10:55:33.970913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.444 qpair failed and we were unable to recover it. 00:39:28.444 [2024-07-22 10:55:33.971216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.444 [2024-07-22 10:55:33.971227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.444 qpair failed and we were unable to recover it. 00:39:28.444 [2024-07-22 10:55:33.971434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.444 [2024-07-22 10:55:33.971445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.444 qpair failed and we were unable to recover it. 00:39:28.444 [2024-07-22 10:55:33.971623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.444 [2024-07-22 10:55:33.971633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.444 qpair failed and we were unable to recover it. 00:39:28.444 [2024-07-22 10:55:33.971942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.444 [2024-07-22 10:55:33.971953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.444 qpair failed and we were unable to recover it. 
00:39:28.444 [2024-07-22 10:55:33.972166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.445 [2024-07-22 10:55:33.972178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.445 qpair failed and we were unable to recover it. 00:39:28.445 [2024-07-22 10:55:33.972406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.445 [2024-07-22 10:55:33.972417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.445 qpair failed and we were unable to recover it. 00:39:28.445 [2024-07-22 10:55:33.972740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.445 [2024-07-22 10:55:33.972751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.445 qpair failed and we were unable to recover it. 00:39:28.445 [2024-07-22 10:55:33.973074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.445 [2024-07-22 10:55:33.973085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.445 qpair failed and we were unable to recover it. 00:39:28.445 [2024-07-22 10:55:33.973434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.445 [2024-07-22 10:55:33.973445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.445 qpair failed and we were unable to recover it. 00:39:28.445 [2024-07-22 10:55:33.973761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.445 [2024-07-22 10:55:33.973772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.445 qpair failed and we were unable to recover it. 00:39:28.445 [2024-07-22 10:55:33.973964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.445 [2024-07-22 10:55:33.973976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.445 qpair failed and we were unable to recover it. 00:39:28.445 [2024-07-22 10:55:33.974259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.445 [2024-07-22 10:55:33.974270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.445 qpair failed and we were unable to recover it. 00:39:28.445 [2024-07-22 10:55:33.974457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.445 [2024-07-22 10:55:33.974469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.445 qpair failed and we were unable to recover it. 00:39:28.445 [2024-07-22 10:55:33.974802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.445 [2024-07-22 10:55:33.974812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.445 qpair failed and we were unable to recover it. 
00:39:28.445 [2024-07-22 10:55:33.975216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.445 [2024-07-22 10:55:33.975226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.445 qpair failed and we were unable to recover it. 00:39:28.445 [2024-07-22 10:55:33.975412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.445 [2024-07-22 10:55:33.975423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.445 qpair failed and we were unable to recover it. 00:39:28.445 [2024-07-22 10:55:33.975735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.445 [2024-07-22 10:55:33.975746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.445 qpair failed and we were unable to recover it. 00:39:28.445 [2024-07-22 10:55:33.975936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.445 [2024-07-22 10:55:33.975947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.445 qpair failed and we were unable to recover it. 00:39:28.445 [2024-07-22 10:55:33.976135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.445 [2024-07-22 10:55:33.976146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.445 qpair failed and we were unable to recover it. 00:39:28.445 [2024-07-22 10:55:33.976332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.445 [2024-07-22 10:55:33.976343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.445 qpair failed and we were unable to recover it. 00:39:28.445 [2024-07-22 10:55:33.976651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.445 [2024-07-22 10:55:33.976662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.445 qpair failed and we were unable to recover it. 00:39:28.445 [2024-07-22 10:55:33.977010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.445 [2024-07-22 10:55:33.977021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.445 qpair failed and we were unable to recover it. 00:39:28.445 [2024-07-22 10:55:33.977327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.445 [2024-07-22 10:55:33.977338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.445 qpair failed and we were unable to recover it. 00:39:28.445 [2024-07-22 10:55:33.977503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.445 [2024-07-22 10:55:33.977514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.445 qpair failed and we were unable to recover it. 
00:39:28.445 [2024-07-22 10:55:33.977824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.445 [2024-07-22 10:55:33.977834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.445 qpair failed and we were unable to recover it. 00:39:28.445 [2024-07-22 10:55:33.978024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.445 [2024-07-22 10:55:33.978035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.445 qpair failed and we were unable to recover it. 00:39:28.445 [2024-07-22 10:55:33.978362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.445 [2024-07-22 10:55:33.978372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.445 qpair failed and we were unable to recover it. 00:39:28.445 [2024-07-22 10:55:33.978663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.445 [2024-07-22 10:55:33.978674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.445 qpair failed and we were unable to recover it. 00:39:28.445 [2024-07-22 10:55:33.978871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.445 [2024-07-22 10:55:33.978882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.445 qpair failed and we were unable to recover it. 00:39:28.445 [2024-07-22 10:55:33.979182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.445 [2024-07-22 10:55:33.979193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.445 qpair failed and we were unable to recover it. 00:39:28.445 [2024-07-22 10:55:33.979525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.445 [2024-07-22 10:55:33.979537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.445 qpair failed and we were unable to recover it. 00:39:28.445 [2024-07-22 10:55:33.979862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.445 [2024-07-22 10:55:33.979873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.445 qpair failed and we were unable to recover it. 00:39:28.445 [2024-07-22 10:55:33.980201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.445 [2024-07-22 10:55:33.980211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.445 qpair failed and we were unable to recover it. 00:39:28.445 [2024-07-22 10:55:33.980533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.445 [2024-07-22 10:55:33.980544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.445 qpair failed and we were unable to recover it. 
00:39:28.445 [2024-07-22 10:55:33.980735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.445 [2024-07-22 10:55:33.980746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.445 qpair failed and we were unable to recover it. 00:39:28.445 [2024-07-22 10:55:33.981099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.445 [2024-07-22 10:55:33.981111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.445 qpair failed and we were unable to recover it. 00:39:28.445 [2024-07-22 10:55:33.981457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.445 [2024-07-22 10:55:33.981468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.445 qpair failed and we were unable to recover it. 00:39:28.445 [2024-07-22 10:55:33.981786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.445 [2024-07-22 10:55:33.981797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.445 qpair failed and we were unable to recover it. 00:39:28.445 [2024-07-22 10:55:33.982144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.445 [2024-07-22 10:55:33.982155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.445 qpair failed and we were unable to recover it. 00:39:28.445 [2024-07-22 10:55:33.982477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.445 [2024-07-22 10:55:33.982488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.445 qpair failed and we were unable to recover it. 00:39:28.445 [2024-07-22 10:55:33.982809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.445 [2024-07-22 10:55:33.982820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.445 qpair failed and we were unable to recover it. 00:39:28.445 [2024-07-22 10:55:33.983168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.445 [2024-07-22 10:55:33.983179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.445 qpair failed and we were unable to recover it. 00:39:28.445 [2024-07-22 10:55:33.983376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.445 [2024-07-22 10:55:33.983387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.445 qpair failed and we were unable to recover it. 00:39:28.445 [2024-07-22 10:55:33.983696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.445 [2024-07-22 10:55:33.983707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.445 qpair failed and we were unable to recover it. 
00:39:28.445 [2024-07-22 10:55:33.984077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.445 [2024-07-22 10:55:33.984088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.445 qpair failed and we were unable to recover it. 00:39:28.446 [2024-07-22 10:55:33.984435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.446 [2024-07-22 10:55:33.984446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.446 qpair failed and we were unable to recover it. 00:39:28.446 [2024-07-22 10:55:33.984770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.446 [2024-07-22 10:55:33.984781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.446 qpair failed and we were unable to recover it. 00:39:28.446 [2024-07-22 10:55:33.985134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.446 [2024-07-22 10:55:33.985145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.446 qpair failed and we were unable to recover it. 00:39:28.446 [2024-07-22 10:55:33.985470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.446 [2024-07-22 10:55:33.985481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.446 qpair failed and we were unable to recover it. 00:39:28.446 [2024-07-22 10:55:33.985668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.446 [2024-07-22 10:55:33.985678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.446 qpair failed and we were unable to recover it. 00:39:28.446 [2024-07-22 10:55:33.986007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.446 [2024-07-22 10:55:33.986017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.446 qpair failed and we were unable to recover it. 00:39:28.446 [2024-07-22 10:55:33.986346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.446 [2024-07-22 10:55:33.986356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.446 qpair failed and we were unable to recover it. 00:39:28.446 [2024-07-22 10:55:33.986558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.446 [2024-07-22 10:55:33.986569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.446 qpair failed and we were unable to recover it. 00:39:28.446 [2024-07-22 10:55:33.986879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.446 [2024-07-22 10:55:33.986889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.446 qpair failed and we were unable to recover it. 
00:39:28.446 [2024-07-22 10:55:33.987225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.446 [2024-07-22 10:55:33.987236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.446 qpair failed and we were unable to recover it. 00:39:28.446 [2024-07-22 10:55:33.987287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.446 [2024-07-22 10:55:33.987297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.446 qpair failed and we were unable to recover it. 00:39:28.446 [2024-07-22 10:55:33.987647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.446 [2024-07-22 10:55:33.987658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.446 qpair failed and we were unable to recover it. 00:39:28.446 [2024-07-22 10:55:33.987987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.446 [2024-07-22 10:55:33.987998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.446 qpair failed and we were unable to recover it. 00:39:28.446 [2024-07-22 10:55:33.988351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.446 [2024-07-22 10:55:33.988362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.446 qpair failed and we were unable to recover it. 00:39:28.446 [2024-07-22 10:55:33.988753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.446 [2024-07-22 10:55:33.988763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.446 qpair failed and we were unable to recover it. 00:39:28.446 [2024-07-22 10:55:33.988948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.446 [2024-07-22 10:55:33.988958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.446 qpair failed and we were unable to recover it. 00:39:28.446 [2024-07-22 10:55:33.989184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.446 [2024-07-22 10:55:33.989197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.446 qpair failed and we were unable to recover it. 00:39:28.446 [2024-07-22 10:55:33.989520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.446 [2024-07-22 10:55:33.989531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.446 qpair failed and we were unable to recover it. 00:39:28.446 [2024-07-22 10:55:33.989729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.446 [2024-07-22 10:55:33.989739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.446 qpair failed and we were unable to recover it. 
00:39:28.446 [2024-07-22 10:55:33.989919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.446 [2024-07-22 10:55:33.989929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.446 qpair failed and we were unable to recover it. 00:39:28.446 [2024-07-22 10:55:33.990278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.446 [2024-07-22 10:55:33.990289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.446 qpair failed and we were unable to recover it. 00:39:28.446 [2024-07-22 10:55:33.990617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.446 [2024-07-22 10:55:33.990629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.446 qpair failed and we were unable to recover it. 00:39:28.446 [2024-07-22 10:55:33.990981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.446 [2024-07-22 10:55:33.990991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.446 qpair failed and we were unable to recover it. 00:39:28.446 [2024-07-22 10:55:33.991392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.446 [2024-07-22 10:55:33.991408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.446 qpair failed and we were unable to recover it. 00:39:28.446 [2024-07-22 10:55:33.991704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.446 [2024-07-22 10:55:33.991714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.446 qpair failed and we were unable to recover it. 00:39:28.446 [2024-07-22 10:55:33.992066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.446 [2024-07-22 10:55:33.992077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.446 qpair failed and we were unable to recover it. 00:39:28.446 [2024-07-22 10:55:33.992430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.446 [2024-07-22 10:55:33.992440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.446 qpair failed and we were unable to recover it. 00:39:28.446 [2024-07-22 10:55:33.992788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.446 [2024-07-22 10:55:33.992799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.446 qpair failed and we were unable to recover it. 00:39:28.446 [2024-07-22 10:55:33.993124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.446 [2024-07-22 10:55:33.993134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.446 qpair failed and we were unable to recover it. 
00:39:28.446 [2024-07-22 10:55:33.993324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.446 [2024-07-22 10:55:33.993334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.446 qpair failed and we were unable to recover it. 00:39:28.446 [2024-07-22 10:55:33.993686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.446 [2024-07-22 10:55:33.993697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.446 qpair failed and we were unable to recover it. 00:39:28.446 [2024-07-22 10:55:33.994027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.446 [2024-07-22 10:55:33.994037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.446 qpair failed and we were unable to recover it. 00:39:28.446 [2024-07-22 10:55:33.994389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.446 [2024-07-22 10:55:33.994404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.446 qpair failed and we were unable to recover it. 00:39:28.446 [2024-07-22 10:55:33.994576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.446 [2024-07-22 10:55:33.994588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.446 qpair failed and we were unable to recover it. 00:39:28.446 [2024-07-22 10:55:33.994918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.446 [2024-07-22 10:55:33.994928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.446 qpair failed and we were unable to recover it. 00:39:28.446 [2024-07-22 10:55:33.995298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.447 [2024-07-22 10:55:33.995309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.447 qpair failed and we were unable to recover it. 00:39:28.447 [2024-07-22 10:55:33.995472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.447 [2024-07-22 10:55:33.995484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.447 qpair failed and we were unable to recover it. 00:39:28.447 [2024-07-22 10:55:33.995582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.447 [2024-07-22 10:55:33.995592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.447 qpair failed and we were unable to recover it. 00:39:28.447 [2024-07-22 10:55:33.995934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.447 [2024-07-22 10:55:33.995944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.447 qpair failed and we were unable to recover it. 
00:39:28.447 [2024-07-22 10:55:33.995999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.447 [2024-07-22 10:55:33.996009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:28.447 qpair failed and we were unable to recover it.
00:39:28.447 [condensed: the same three-line sequence (connect() failed, errno = 111; sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats back-to-back with new timestamps from 2024-07-22 10:55:33.996 through 10:55:34.058]
00:39:28.452 [2024-07-22 10:55:34.058088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.452 [2024-07-22 10:55:34.058100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.452 qpair failed and we were unable to recover it. 00:39:28.452 [2024-07-22 10:55:34.058394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.452 [2024-07-22 10:55:34.058415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.452 qpair failed and we were unable to recover it. 00:39:28.452 [2024-07-22 10:55:34.058736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.452 [2024-07-22 10:55:34.058747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.452 qpair failed and we were unable to recover it. 00:39:28.452 [2024-07-22 10:55:34.059086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.452 [2024-07-22 10:55:34.059097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.452 qpair failed and we were unable to recover it. 00:39:28.452 [2024-07-22 10:55:34.059146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.452 [2024-07-22 10:55:34.059155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.452 qpair failed and we were unable to recover it. 00:39:28.452 [2024-07-22 10:55:34.059458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.452 [2024-07-22 10:55:34.059469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.452 qpair failed and we were unable to recover it. 00:39:28.452 [2024-07-22 10:55:34.059583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.452 [2024-07-22 10:55:34.059593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.452 qpair failed and we were unable to recover it. 00:39:28.452 [2024-07-22 10:55:34.059795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.452 [2024-07-22 10:55:34.059806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.452 qpair failed and we were unable to recover it. 00:39:28.452 [2024-07-22 10:55:34.059953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.452 [2024-07-22 10:55:34.059965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.452 qpair failed and we were unable to recover it. 00:39:28.452 [2024-07-22 10:55:34.060166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.452 [2024-07-22 10:55:34.060177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.452 qpair failed and we were unable to recover it. 
00:39:28.452 [2024-07-22 10:55:34.060497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.452 [2024-07-22 10:55:34.060508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.452 qpair failed and we were unable to recover it. 00:39:28.452 [2024-07-22 10:55:34.060852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.452 [2024-07-22 10:55:34.060866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.452 qpair failed and we were unable to recover it. 00:39:28.452 [2024-07-22 10:55:34.061052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.452 [2024-07-22 10:55:34.061063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.452 qpair failed and we were unable to recover it. 00:39:28.452 [2024-07-22 10:55:34.061410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.452 [2024-07-22 10:55:34.061421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.452 qpair failed and we were unable to recover it. 00:39:28.452 [2024-07-22 10:55:34.061734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.452 [2024-07-22 10:55:34.061745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.452 qpair failed and we were unable to recover it. 00:39:28.452 [2024-07-22 10:55:34.062093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.452 [2024-07-22 10:55:34.062104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.452 qpair failed and we were unable to recover it. 00:39:28.452 [2024-07-22 10:55:34.062450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.452 [2024-07-22 10:55:34.062461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.452 qpair failed and we were unable to recover it. 00:39:28.452 [2024-07-22 10:55:34.062813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.452 [2024-07-22 10:55:34.062824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.452 qpair failed and we were unable to recover it. 00:39:28.452 [2024-07-22 10:55:34.063176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.452 [2024-07-22 10:55:34.063186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.452 qpair failed and we were unable to recover it. 00:39:28.452 [2024-07-22 10:55:34.063562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.452 [2024-07-22 10:55:34.063573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.452 qpair failed and we were unable to recover it. 
00:39:28.452 [2024-07-22 10:55:34.063894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.452 [2024-07-22 10:55:34.063905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.452 qpair failed and we were unable to recover it. 00:39:28.452 [2024-07-22 10:55:34.064256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.452 [2024-07-22 10:55:34.064268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.452 qpair failed and we were unable to recover it. 00:39:28.452 [2024-07-22 10:55:34.064590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.452 [2024-07-22 10:55:34.064603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.452 qpair failed and we were unable to recover it. 00:39:28.452 [2024-07-22 10:55:34.064832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.452 [2024-07-22 10:55:34.064842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.452 qpair failed and we were unable to recover it. 00:39:28.452 [2024-07-22 10:55:34.065102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.452 [2024-07-22 10:55:34.065113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.452 qpair failed and we were unable to recover it. 00:39:28.453 [2024-07-22 10:55:34.065331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.453 [2024-07-22 10:55:34.065341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.453 qpair failed and we were unable to recover it. 00:39:28.453 [2024-07-22 10:55:34.065647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.453 [2024-07-22 10:55:34.065658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.453 qpair failed and we were unable to recover it. 00:39:28.453 [2024-07-22 10:55:34.065806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.453 [2024-07-22 10:55:34.065817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.453 qpair failed and we were unable to recover it. 00:39:28.453 [2024-07-22 10:55:34.065868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.453 [2024-07-22 10:55:34.065878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.453 qpair failed and we were unable to recover it. 00:39:28.453 [2024-07-22 10:55:34.066168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.453 [2024-07-22 10:55:34.066179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.453 qpair failed and we were unable to recover it. 
00:39:28.453 [2024-07-22 10:55:34.066518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.453 [2024-07-22 10:55:34.066529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.453 qpair failed and we were unable to recover it. 00:39:28.453 [2024-07-22 10:55:34.066825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.453 [2024-07-22 10:55:34.066835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.453 qpair failed and we were unable to recover it. 00:39:28.453 [2024-07-22 10:55:34.067156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.453 [2024-07-22 10:55:34.067167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.453 qpair failed and we were unable to recover it. 00:39:28.453 [2024-07-22 10:55:34.067480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.453 [2024-07-22 10:55:34.067491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.453 qpair failed and we were unable to recover it. 00:39:28.453 [2024-07-22 10:55:34.067845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.453 [2024-07-22 10:55:34.067855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.453 qpair failed and we were unable to recover it. 00:39:28.453 [2024-07-22 10:55:34.068203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.453 [2024-07-22 10:55:34.068214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.453 qpair failed and we were unable to recover it. 00:39:28.453 [2024-07-22 10:55:34.068506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.453 [2024-07-22 10:55:34.068517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.453 qpair failed and we were unable to recover it. 00:39:28.453 [2024-07-22 10:55:34.068842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.453 [2024-07-22 10:55:34.068854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.453 qpair failed and we were unable to recover it. 00:39:28.453 [2024-07-22 10:55:34.069165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.453 [2024-07-22 10:55:34.069179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.453 qpair failed and we were unable to recover it. 00:39:28.453 [2024-07-22 10:55:34.069525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.453 [2024-07-22 10:55:34.069537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.453 qpair failed and we were unable to recover it. 
00:39:28.453 [2024-07-22 10:55:34.069728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.453 [2024-07-22 10:55:34.069740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.453 qpair failed and we were unable to recover it. 00:39:28.453 [2024-07-22 10:55:34.070100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.453 [2024-07-22 10:55:34.070111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.453 qpair failed and we were unable to recover it. 00:39:28.453 [2024-07-22 10:55:34.070459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.453 [2024-07-22 10:55:34.070470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.453 qpair failed and we were unable to recover it. 00:39:28.453 [2024-07-22 10:55:34.070659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.453 [2024-07-22 10:55:34.070669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.453 qpair failed and we were unable to recover it. 00:39:28.453 [2024-07-22 10:55:34.071019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.453 [2024-07-22 10:55:34.071031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.453 qpair failed and we were unable to recover it. 00:39:28.453 [2024-07-22 10:55:34.071359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.453 [2024-07-22 10:55:34.071369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.453 qpair failed and we were unable to recover it. 00:39:28.453 [2024-07-22 10:55:34.071721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.453 [2024-07-22 10:55:34.071732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.453 qpair failed and we were unable to recover it. 00:39:28.453 [2024-07-22 10:55:34.072079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.453 [2024-07-22 10:55:34.072089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.453 qpair failed and we were unable to recover it. 00:39:28.453 [2024-07-22 10:55:34.072415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.453 [2024-07-22 10:55:34.072426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.453 qpair failed and we were unable to recover it. 00:39:28.453 [2024-07-22 10:55:34.072728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.453 [2024-07-22 10:55:34.072739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.453 qpair failed and we were unable to recover it. 
00:39:28.453 [2024-07-22 10:55:34.073072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.453 [2024-07-22 10:55:34.073083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.453 qpair failed and we were unable to recover it. 00:39:28.453 [2024-07-22 10:55:34.073432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.453 [2024-07-22 10:55:34.073445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.453 qpair failed and we were unable to recover it. 00:39:28.453 [2024-07-22 10:55:34.073781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.453 [2024-07-22 10:55:34.073792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.453 qpair failed and we were unable to recover it. 00:39:28.453 [2024-07-22 10:55:34.074118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.453 [2024-07-22 10:55:34.074128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.453 qpair failed and we were unable to recover it. 00:39:28.453 [2024-07-22 10:55:34.074481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.453 [2024-07-22 10:55:34.074492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.453 qpair failed and we were unable to recover it. 00:39:28.453 [2024-07-22 10:55:34.074842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.453 [2024-07-22 10:55:34.074853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.453 qpair failed and we were unable to recover it. 00:39:28.453 [2024-07-22 10:55:34.075151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.453 [2024-07-22 10:55:34.075161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.453 qpair failed and we were unable to recover it. 00:39:28.453 [2024-07-22 10:55:34.075491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.453 [2024-07-22 10:55:34.075502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.453 qpair failed and we were unable to recover it. 00:39:28.453 [2024-07-22 10:55:34.075853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.453 [2024-07-22 10:55:34.075865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.453 qpair failed and we were unable to recover it. 00:39:28.453 [2024-07-22 10:55:34.076212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.453 [2024-07-22 10:55:34.076223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.453 qpair failed and we were unable to recover it. 
00:39:28.453 [2024-07-22 10:55:34.076562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.453 [2024-07-22 10:55:34.076574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.453 qpair failed and we were unable to recover it. 00:39:28.453 [2024-07-22 10:55:34.076751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.453 [2024-07-22 10:55:34.076762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.453 qpair failed and we were unable to recover it. 00:39:28.453 [2024-07-22 10:55:34.076953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.453 [2024-07-22 10:55:34.076965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.453 qpair failed and we were unable to recover it. 00:39:28.453 [2024-07-22 10:55:34.077325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.453 [2024-07-22 10:55:34.077337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.453 qpair failed and we were unable to recover it. 00:39:28.453 [2024-07-22 10:55:34.077670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.453 [2024-07-22 10:55:34.077682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.453 qpair failed and we were unable to recover it. 00:39:28.453 [2024-07-22 10:55:34.077879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.453 [2024-07-22 10:55:34.077891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.453 qpair failed and we were unable to recover it. 00:39:28.454 [2024-07-22 10:55:34.078196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.454 [2024-07-22 10:55:34.078207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.454 qpair failed and we were unable to recover it. 00:39:28.454 [2024-07-22 10:55:34.078296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.454 [2024-07-22 10:55:34.078305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.454 qpair failed and we were unable to recover it. 00:39:28.454 [2024-07-22 10:55:34.078408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.454 [2024-07-22 10:55:34.078420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.454 qpair failed and we were unable to recover it. 00:39:28.454 [2024-07-22 10:55:34.078739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.454 [2024-07-22 10:55:34.078750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.454 qpair failed and we were unable to recover it. 
00:39:28.454 [2024-07-22 10:55:34.078953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.454 [2024-07-22 10:55:34.078964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.454 qpair failed and we were unable to recover it. 00:39:28.454 [2024-07-22 10:55:34.079313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.454 [2024-07-22 10:55:34.079323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.454 qpair failed and we were unable to recover it. 00:39:28.454 [2024-07-22 10:55:34.079660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.454 [2024-07-22 10:55:34.079671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.454 qpair failed and we were unable to recover it. 00:39:28.454 [2024-07-22 10:55:34.079854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.454 [2024-07-22 10:55:34.079864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.454 qpair failed and we were unable to recover it. 00:39:28.454 [2024-07-22 10:55:34.080169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.454 [2024-07-22 10:55:34.080180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.454 qpair failed and we were unable to recover it. 00:39:28.454 [2024-07-22 10:55:34.080502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.454 [2024-07-22 10:55:34.080512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.454 qpair failed and we were unable to recover it. 00:39:28.454 [2024-07-22 10:55:34.080863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.454 [2024-07-22 10:55:34.080874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.454 qpair failed and we were unable to recover it. 00:39:28.454 [2024-07-22 10:55:34.081203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.454 [2024-07-22 10:55:34.081214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.454 qpair failed and we were unable to recover it. 00:39:28.454 [2024-07-22 10:55:34.081514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.454 [2024-07-22 10:55:34.081525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.454 qpair failed and we were unable to recover it. 00:39:28.454 [2024-07-22 10:55:34.081823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.454 [2024-07-22 10:55:34.081834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.454 qpair failed and we were unable to recover it. 
00:39:28.454 [2024-07-22 10:55:34.082198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.454 [2024-07-22 10:55:34.082210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.454 qpair failed and we were unable to recover it. 00:39:28.454 [2024-07-22 10:55:34.082413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.454 [2024-07-22 10:55:34.082425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.454 qpair failed and we were unable to recover it. 00:39:28.454 [2024-07-22 10:55:34.082764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.454 [2024-07-22 10:55:34.082775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.454 qpair failed and we were unable to recover it. 00:39:28.454 [2024-07-22 10:55:34.082970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.454 [2024-07-22 10:55:34.082981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.454 qpair failed and we were unable to recover it. 00:39:28.454 [2024-07-22 10:55:34.083323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.454 [2024-07-22 10:55:34.083334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.454 qpair failed and we were unable to recover it. 00:39:28.454 [2024-07-22 10:55:34.083632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.454 [2024-07-22 10:55:34.083643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.454 qpair failed and we were unable to recover it. 00:39:28.454 [2024-07-22 10:55:34.083968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.454 [2024-07-22 10:55:34.083979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.454 qpair failed and we were unable to recover it. 00:39:28.454 [2024-07-22 10:55:34.084285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.454 [2024-07-22 10:55:34.084295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.454 qpair failed and we were unable to recover it. 00:39:28.454 [2024-07-22 10:55:34.084613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.454 [2024-07-22 10:55:34.084624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.454 qpair failed and we were unable to recover it. 00:39:28.454 [2024-07-22 10:55:34.084947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.454 [2024-07-22 10:55:34.084958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.454 qpair failed and we were unable to recover it. 
00:39:28.454 [2024-07-22 10:55:34.085254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.454 [2024-07-22 10:55:34.085264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.454 qpair failed and we were unable to recover it. 00:39:28.454 [2024-07-22 10:55:34.085588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.454 [2024-07-22 10:55:34.085600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.454 qpair failed and we were unable to recover it. 00:39:28.454 [2024-07-22 10:55:34.085951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.454 [2024-07-22 10:55:34.085962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.454 qpair failed and we were unable to recover it. 00:39:28.454 [2024-07-22 10:55:34.086288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.454 [2024-07-22 10:55:34.086299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.454 qpair failed and we were unable to recover it. 00:39:28.454 [2024-07-22 10:55:34.086648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.454 [2024-07-22 10:55:34.086659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.454 qpair failed and we were unable to recover it. 00:39:28.454 [2024-07-22 10:55:34.087055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.454 [2024-07-22 10:55:34.087066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.454 qpair failed and we were unable to recover it. 00:39:28.454 [2024-07-22 10:55:34.087383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.454 [2024-07-22 10:55:34.087393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.454 qpair failed and we were unable to recover it. 00:39:28.454 [2024-07-22 10:55:34.087564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.454 [2024-07-22 10:55:34.087576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.454 qpair failed and we were unable to recover it. 00:39:28.454 [2024-07-22 10:55:34.087881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.454 [2024-07-22 10:55:34.087891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.454 qpair failed and we were unable to recover it. 00:39:28.454 [2024-07-22 10:55:34.088228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.454 [2024-07-22 10:55:34.088239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.454 qpair failed and we were unable to recover it. 
00:39:28.454 [2024-07-22 10:55:34.088556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.454 [2024-07-22 10:55:34.088567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.454 qpair failed and we were unable to recover it. 00:39:28.454 [2024-07-22 10:55:34.088906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.454 [2024-07-22 10:55:34.088918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.454 qpair failed and we were unable to recover it. 00:39:28.454 [2024-07-22 10:55:34.089109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.454 [2024-07-22 10:55:34.089120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.454 qpair failed and we were unable to recover it. 00:39:28.454 [2024-07-22 10:55:34.089418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.454 [2024-07-22 10:55:34.089429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.454 qpair failed and we were unable to recover it. 00:39:28.454 [2024-07-22 10:55:34.089625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.454 [2024-07-22 10:55:34.089635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.454 qpair failed and we were unable to recover it. 00:39:28.454 [2024-07-22 10:55:34.089911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.454 [2024-07-22 10:55:34.089922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.454 qpair failed and we were unable to recover it. 00:39:28.454 [2024-07-22 10:55:34.090109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.455 [2024-07-22 10:55:34.090122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.455 qpair failed and we were unable to recover it. 00:39:28.455 [2024-07-22 10:55:34.090324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.455 [2024-07-22 10:55:34.090335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.455 qpair failed and we were unable to recover it. 00:39:28.455 [2024-07-22 10:55:34.090540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.455 [2024-07-22 10:55:34.090550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.455 qpair failed and we were unable to recover it. 00:39:28.455 [2024-07-22 10:55:34.090845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.455 [2024-07-22 10:55:34.090855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.455 qpair failed and we were unable to recover it. 
00:39:28.455 [2024-07-22 10:55:34.091048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.455 [2024-07-22 10:55:34.091059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.455 qpair failed and we were unable to recover it. 00:39:28.455 [2024-07-22 10:55:34.091387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.455 [2024-07-22 10:55:34.091406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.455 qpair failed and we were unable to recover it. 00:39:28.455 [2024-07-22 10:55:34.091764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.455 [2024-07-22 10:55:34.091775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.455 qpair failed and we were unable to recover it. 00:39:28.455 [2024-07-22 10:55:34.092107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.455 [2024-07-22 10:55:34.092118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.455 qpair failed and we were unable to recover it. 00:39:28.455 [2024-07-22 10:55:34.092349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.455 [2024-07-22 10:55:34.092359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.455 qpair failed and we were unable to recover it. 00:39:28.455 [2024-07-22 10:55:34.092668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.455 [2024-07-22 10:55:34.092680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.455 qpair failed and we were unable to recover it. 00:39:28.455 [2024-07-22 10:55:34.093029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.455 [2024-07-22 10:55:34.093040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.455 qpair failed and we were unable to recover it. 00:39:28.455 [2024-07-22 10:55:34.093362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.455 [2024-07-22 10:55:34.093373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.455 qpair failed and we were unable to recover it. 00:39:28.455 [2024-07-22 10:55:34.093697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.455 [2024-07-22 10:55:34.093709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.455 qpair failed and we were unable to recover it. 00:39:28.455 [2024-07-22 10:55:34.094061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.455 [2024-07-22 10:55:34.094073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.455 qpair failed and we were unable to recover it. 
00:39:28.455 [2024-07-22 10:55:34.094424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.455 [2024-07-22 10:55:34.094436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.455 qpair failed and we were unable to recover it. 00:39:28.455 [2024-07-22 10:55:34.094789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.455 [2024-07-22 10:55:34.094800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.455 qpair failed and we were unable to recover it. 00:39:28.455 [2024-07-22 10:55:34.095125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.455 [2024-07-22 10:55:34.095136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.455 qpair failed and we were unable to recover it. 00:39:28.455 [2024-07-22 10:55:34.095449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.455 [2024-07-22 10:55:34.095459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.455 qpair failed and we were unable to recover it. 00:39:28.455 [2024-07-22 10:55:34.095805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.455 [2024-07-22 10:55:34.095817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.455 qpair failed and we were unable to recover it. 00:39:28.455 [2024-07-22 10:55:34.096088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.455 [2024-07-22 10:55:34.096099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.455 qpair failed and we were unable to recover it. 00:39:28.455 [2024-07-22 10:55:34.096150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.455 [2024-07-22 10:55:34.096159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.455 qpair failed and we were unable to recover it. 00:39:28.455 [2024-07-22 10:55:34.096466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.455 [2024-07-22 10:55:34.096477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.455 qpair failed and we were unable to recover it. 00:39:28.455 [2024-07-22 10:55:34.096793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.455 [2024-07-22 10:55:34.096803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.455 qpair failed and we were unable to recover it. 00:39:28.455 [2024-07-22 10:55:34.096859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.455 [2024-07-22 10:55:34.096868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.455 qpair failed and we were unable to recover it. 
00:39:28.455 [2024-07-22 10:55:34.097160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.455 [2024-07-22 10:55:34.097170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:28.455 qpair failed and we were unable to recover it.
00:39:28.455 [2024-07-22 10:55:34.097472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.455 [2024-07-22 10:55:34.097484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:28.455 qpair failed and we were unable to recover it.
[... the same three-line failure sequence repeats for every further connection attempt from [2024-07-22 10:55:34.097710] through [2024-07-22 10:55:34.160349], always against tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 ...]
00:39:28.730 [2024-07-22 10:55:34.160671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.730 [2024-07-22 10:55:34.160682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:28.730 qpair failed and we were unable to recover it.
00:39:28.730 [2024-07-22 10:55:34.160873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.730 [2024-07-22 10:55:34.160885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.730 qpair failed and we were unable to recover it. 00:39:28.730 [2024-07-22 10:55:34.161079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.730 [2024-07-22 10:55:34.161090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.730 qpair failed and we were unable to recover it. 00:39:28.730 [2024-07-22 10:55:34.161402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.730 [2024-07-22 10:55:34.161413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.730 qpair failed and we were unable to recover it. 00:39:28.730 [2024-07-22 10:55:34.161750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.730 [2024-07-22 10:55:34.161761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.730 qpair failed and we were unable to recover it. 00:39:28.730 [2024-07-22 10:55:34.161950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.730 [2024-07-22 10:55:34.161960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.730 qpair failed and we were unable to recover it. 00:39:28.730 [2024-07-22 10:55:34.162303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.730 [2024-07-22 10:55:34.162313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.730 qpair failed and we were unable to recover it. 00:39:28.730 [2024-07-22 10:55:34.162642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.730 [2024-07-22 10:55:34.162653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.730 qpair failed and we were unable to recover it. 00:39:28.730 [2024-07-22 10:55:34.163021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.730 [2024-07-22 10:55:34.163032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.730 qpair failed and we were unable to recover it. 00:39:28.730 [2024-07-22 10:55:34.163205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.730 [2024-07-22 10:55:34.163216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.730 qpair failed and we were unable to recover it. 00:39:28.730 [2024-07-22 10:55:34.163553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.730 [2024-07-22 10:55:34.163564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.730 qpair failed and we were unable to recover it. 
00:39:28.730 [2024-07-22 10:55:34.163891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.730 [2024-07-22 10:55:34.163903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.730 qpair failed and we were unable to recover it. 00:39:28.730 [2024-07-22 10:55:34.164234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.730 [2024-07-22 10:55:34.164244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.730 qpair failed and we were unable to recover it. 00:39:28.730 [2024-07-22 10:55:34.164584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.730 [2024-07-22 10:55:34.164595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.730 qpair failed and we were unable to recover it. 00:39:28.730 [2024-07-22 10:55:34.164946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.730 [2024-07-22 10:55:34.164956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.730 qpair failed and we were unable to recover it. 00:39:28.730 [2024-07-22 10:55:34.165289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.731 [2024-07-22 10:55:34.165300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.731 qpair failed and we were unable to recover it. 00:39:28.731 [2024-07-22 10:55:34.165485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.731 [2024-07-22 10:55:34.165497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.731 qpair failed and we were unable to recover it. 00:39:28.731 [2024-07-22 10:55:34.165811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.731 [2024-07-22 10:55:34.165822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.731 qpair failed and we were unable to recover it. 00:39:28.731 [2024-07-22 10:55:34.166019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.731 [2024-07-22 10:55:34.166030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.731 qpair failed and we were unable to recover it. 00:39:28.731 [2024-07-22 10:55:34.166375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.731 [2024-07-22 10:55:34.166385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.731 qpair failed and we were unable to recover it. 00:39:28.731 [2024-07-22 10:55:34.166582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.731 [2024-07-22 10:55:34.166593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.731 qpair failed and we were unable to recover it. 
00:39:28.731 [2024-07-22 10:55:34.166904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.731 [2024-07-22 10:55:34.166914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.731 qpair failed and we were unable to recover it. 00:39:28.731 [2024-07-22 10:55:34.167269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.731 [2024-07-22 10:55:34.167279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.731 qpair failed and we were unable to recover it. 00:39:28.731 [2024-07-22 10:55:34.167662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.731 [2024-07-22 10:55:34.167673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.731 qpair failed and we were unable to recover it. 00:39:28.731 [2024-07-22 10:55:34.167991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.731 [2024-07-22 10:55:34.168002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.731 qpair failed and we were unable to recover it. 00:39:28.731 [2024-07-22 10:55:34.168350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.731 [2024-07-22 10:55:34.168362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.731 qpair failed and we were unable to recover it. 00:39:28.731 [2024-07-22 10:55:34.168710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.731 [2024-07-22 10:55:34.168721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.731 qpair failed and we were unable to recover it. 00:39:28.731 [2024-07-22 10:55:34.169048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.731 [2024-07-22 10:55:34.169059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.731 qpair failed and we were unable to recover it. 00:39:28.731 [2024-07-22 10:55:34.169251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.731 [2024-07-22 10:55:34.169261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.731 qpair failed and we were unable to recover it. 00:39:28.731 [2024-07-22 10:55:34.169431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.731 [2024-07-22 10:55:34.169441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.731 qpair failed and we were unable to recover it. 00:39:28.731 [2024-07-22 10:55:34.169804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.731 [2024-07-22 10:55:34.169814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.731 qpair failed and we were unable to recover it. 
00:39:28.731 [2024-07-22 10:55:34.170144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.731 [2024-07-22 10:55:34.170156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.731 qpair failed and we were unable to recover it. 00:39:28.731 [2024-07-22 10:55:34.170518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.731 [2024-07-22 10:55:34.170529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.731 qpair failed and we were unable to recover it. 00:39:28.731 [2024-07-22 10:55:34.170881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.731 [2024-07-22 10:55:34.170892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.731 qpair failed and we were unable to recover it. 00:39:28.731 [2024-07-22 10:55:34.171239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.731 [2024-07-22 10:55:34.171249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.731 qpair failed and we were unable to recover it. 00:39:28.731 [2024-07-22 10:55:34.171576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.731 [2024-07-22 10:55:34.171587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.731 qpair failed and we were unable to recover it. 00:39:28.731 [2024-07-22 10:55:34.171948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.731 [2024-07-22 10:55:34.171959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.731 qpair failed and we were unable to recover it. 00:39:28.731 [2024-07-22 10:55:34.172307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.731 [2024-07-22 10:55:34.172318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.731 qpair failed and we were unable to recover it. 00:39:28.731 [2024-07-22 10:55:34.172714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.731 [2024-07-22 10:55:34.172726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.731 qpair failed and we were unable to recover it. 00:39:28.731 [2024-07-22 10:55:34.172920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.731 [2024-07-22 10:55:34.172932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.731 qpair failed and we were unable to recover it. 00:39:28.731 [2024-07-22 10:55:34.173213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.731 [2024-07-22 10:55:34.173224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.731 qpair failed and we were unable to recover it. 
00:39:28.731 [2024-07-22 10:55:34.173421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.731 [2024-07-22 10:55:34.173432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.731 qpair failed and we were unable to recover it. 00:39:28.731 [2024-07-22 10:55:34.173529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.731 [2024-07-22 10:55:34.173540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.731 qpair failed and we were unable to recover it. 00:39:28.731 [2024-07-22 10:55:34.173716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.731 [2024-07-22 10:55:34.173726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.731 qpair failed and we were unable to recover it. 00:39:28.731 [2024-07-22 10:55:34.173932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.731 [2024-07-22 10:55:34.173943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.731 qpair failed and we were unable to recover it. 00:39:28.731 [2024-07-22 10:55:34.174093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.731 [2024-07-22 10:55:34.174103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.731 qpair failed and we were unable to recover it. 00:39:28.731 [2024-07-22 10:55:34.174417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.731 [2024-07-22 10:55:34.174428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.731 qpair failed and we were unable to recover it. 00:39:28.731 [2024-07-22 10:55:34.174861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.731 [2024-07-22 10:55:34.174872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.731 qpair failed and we were unable to recover it. 00:39:28.731 [2024-07-22 10:55:34.175197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.731 [2024-07-22 10:55:34.175209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.731 qpair failed and we were unable to recover it. 00:39:28.731 [2024-07-22 10:55:34.175546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.731 [2024-07-22 10:55:34.175557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.731 qpair failed and we were unable to recover it. 00:39:28.731 [2024-07-22 10:55:34.175876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.731 [2024-07-22 10:55:34.175886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.731 qpair failed and we were unable to recover it. 
00:39:28.731 [2024-07-22 10:55:34.176236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.731 [2024-07-22 10:55:34.176247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.731 qpair failed and we were unable to recover it. 00:39:28.731 [2024-07-22 10:55:34.176611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.731 [2024-07-22 10:55:34.176623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.731 qpair failed and we were unable to recover it. 00:39:28.731 [2024-07-22 10:55:34.176966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.731 [2024-07-22 10:55:34.176977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.731 qpair failed and we were unable to recover it. 00:39:28.731 [2024-07-22 10:55:34.177179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.731 [2024-07-22 10:55:34.177190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.731 qpair failed and we were unable to recover it. 00:39:28.731 [2024-07-22 10:55:34.177382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.731 [2024-07-22 10:55:34.177393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.731 qpair failed and we were unable to recover it. 00:39:28.731 [2024-07-22 10:55:34.177569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.731 [2024-07-22 10:55:34.177580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.731 qpair failed and we were unable to recover it. 00:39:28.731 [2024-07-22 10:55:34.177892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.731 [2024-07-22 10:55:34.177902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.731 qpair failed and we were unable to recover it. 00:39:28.731 [2024-07-22 10:55:34.178220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.731 [2024-07-22 10:55:34.178231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.731 qpair failed and we were unable to recover it. 00:39:28.731 [2024-07-22 10:55:34.178435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.731 [2024-07-22 10:55:34.178447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.731 qpair failed and we were unable to recover it. 00:39:28.731 [2024-07-22 10:55:34.178665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.731 [2024-07-22 10:55:34.178676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.731 qpair failed and we were unable to recover it. 
00:39:28.731 [2024-07-22 10:55:34.179006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.731 [2024-07-22 10:55:34.179018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.731 qpair failed and we were unable to recover it. 00:39:28.731 [2024-07-22 10:55:34.179221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.731 [2024-07-22 10:55:34.179232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.731 qpair failed and we were unable to recover it. 00:39:28.731 [2024-07-22 10:55:34.179414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.731 [2024-07-22 10:55:34.179426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.731 qpair failed and we were unable to recover it. 00:39:28.731 [2024-07-22 10:55:34.179719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.731 [2024-07-22 10:55:34.179729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.731 qpair failed and we were unable to recover it. 00:39:28.731 [2024-07-22 10:55:34.179915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.731 [2024-07-22 10:55:34.179926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.731 qpair failed and we were unable to recover it. 00:39:28.731 [2024-07-22 10:55:34.180129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.731 [2024-07-22 10:55:34.180140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.731 qpair failed and we were unable to recover it. 00:39:28.731 [2024-07-22 10:55:34.180489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.731 [2024-07-22 10:55:34.180501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.731 qpair failed and we were unable to recover it. 00:39:28.731 [2024-07-22 10:55:34.180713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.731 [2024-07-22 10:55:34.180724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.731 qpair failed and we were unable to recover it. 00:39:28.731 [2024-07-22 10:55:34.180850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.731 [2024-07-22 10:55:34.180861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.181162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.181173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 
00:39:28.732 [2024-07-22 10:55:34.181333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.181345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.181534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.181545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.181876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.181886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.182237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.182247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.182576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.182587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.182780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.182791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.183119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.183130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.183331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.183342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.183696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.183711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.183901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.183912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 
00:39:28.732 [2024-07-22 10:55:34.184246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.184257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.184455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.184467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.184815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.184826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.185035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.185045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.185393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.185411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.185772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.185783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.186089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.186100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.186426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.186437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.186799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.186811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.187159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.187169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 
00:39:28.732 [2024-07-22 10:55:34.187459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.187469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.187812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.187822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.188147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.188158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.188354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.188365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.188556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.188566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.188879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.188889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.189204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.189215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.189409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.189420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.189814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.189825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.190155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.190166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 
00:39:28.732 [2024-07-22 10:55:34.190491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.190502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.190814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.190825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.191176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.191187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.191511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.191523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.191855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.191866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.192213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.192225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.192412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.192424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.192611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.192622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.192808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.192820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.193147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.193158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 
00:39:28.732 [2024-07-22 10:55:34.193514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.193525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.193851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.193862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.194199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.194209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.194344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.194354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.194679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.194689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.195016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.195027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.195224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.195235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.195557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.195568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.195758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.195769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.195951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.195962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 
00:39:28.732 [2024-07-22 10:55:34.196224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.196235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.196582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.196593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.196946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.196956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.197302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.197313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.197638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.197650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.198000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.198011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.198365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.198375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.198691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.198703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.199029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.199040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.199392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.199407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 
00:39:28.732 [2024-07-22 10:55:34.199725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.199736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.200067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-07-22 10:55:34.200077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-07-22 10:55:34.200404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.733 [2024-07-22 10:55:34.200418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.733 qpair failed and we were unable to recover it. 00:39:28.733 [2024-07-22 10:55:34.200779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.733 [2024-07-22 10:55:34.200789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.733 qpair failed and we were unable to recover it. 00:39:28.733 [2024-07-22 10:55:34.201111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.733 [2024-07-22 10:55:34.201122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.733 qpair failed and we were unable to recover it. 00:39:28.733 [2024-07-22 10:55:34.201451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.733 [2024-07-22 10:55:34.201462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.733 qpair failed and we were unable to recover it. 00:39:28.733 [2024-07-22 10:55:34.201786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.733 [2024-07-22 10:55:34.201797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.733 qpair failed and we were unable to recover it. 00:39:28.733 [2024-07-22 10:55:34.202144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.733 [2024-07-22 10:55:34.202155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.733 qpair failed and we were unable to recover it. 00:39:28.733 [2024-07-22 10:55:34.202206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.733 [2024-07-22 10:55:34.202216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.733 qpair failed and we were unable to recover it. 00:39:28.733 [2024-07-22 10:55:34.202507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.733 [2024-07-22 10:55:34.202519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.733 qpair failed and we were unable to recover it. 
00:39:28.733 [2024-07-22 10:55:34.202869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.733 [2024-07-22 10:55:34.202880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.733 qpair failed and we were unable to recover it. 00:39:28.733 [2024-07-22 10:55:34.203067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.733 [2024-07-22 10:55:34.203078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.733 qpair failed and we were unable to recover it. 00:39:28.733 [2024-07-22 10:55:34.203407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.733 [2024-07-22 10:55:34.203417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.733 qpair failed and we were unable to recover it. 00:39:28.733 [2024-07-22 10:55:34.203764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.733 [2024-07-22 10:55:34.203775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.733 qpair failed and we were unable to recover it. 00:39:28.733 [2024-07-22 10:55:34.204093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.733 [2024-07-22 10:55:34.204104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.733 qpair failed and we were unable to recover it. 00:39:28.733 [2024-07-22 10:55:34.204427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.733 [2024-07-22 10:55:34.204437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.733 qpair failed and we were unable to recover it. 00:39:28.733 [2024-07-22 10:55:34.204775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.733 [2024-07-22 10:55:34.204786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.733 qpair failed and we were unable to recover it. 00:39:28.733 [2024-07-22 10:55:34.205135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.733 [2024-07-22 10:55:34.205145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.733 qpair failed and we were unable to recover it. 00:39:28.733 [2024-07-22 10:55:34.205468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.733 [2024-07-22 10:55:34.205479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.733 qpair failed and we were unable to recover it. 00:39:28.733 [2024-07-22 10:55:34.205808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.733 [2024-07-22 10:55:34.205819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.733 qpair failed and we were unable to recover it. 
00:39:28.733 [2024-07-22 10:55:34.206168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.733 [2024-07-22 10:55:34.206180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.733 qpair failed and we were unable to recover it. 00:39:28.733 [2024-07-22 10:55:34.206537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.733 [2024-07-22 10:55:34.206548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.733 qpair failed and we were unable to recover it. 00:39:28.733 [2024-07-22 10:55:34.206891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.733 [2024-07-22 10:55:34.206901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.733 qpair failed and we were unable to recover it. 00:39:28.733 [2024-07-22 10:55:34.207245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.733 [2024-07-22 10:55:34.207256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.733 qpair failed and we were unable to recover it. 00:39:28.733 [2024-07-22 10:55:34.207605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.733 [2024-07-22 10:55:34.207616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.733 qpair failed and we were unable to recover it. 00:39:28.733 [2024-07-22 10:55:34.207961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.733 [2024-07-22 10:55:34.207972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.733 qpair failed and we were unable to recover it. 00:39:28.733 [2024-07-22 10:55:34.208295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.733 [2024-07-22 10:55:34.208306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.733 qpair failed and we were unable to recover it. 00:39:28.733 [2024-07-22 10:55:34.208656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.733 [2024-07-22 10:55:34.208668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.733 qpair failed and we were unable to recover it. 00:39:28.733 [2024-07-22 10:55:34.208834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.733 [2024-07-22 10:55:34.208846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.733 qpair failed and we were unable to recover it. 00:39:28.733 [2024-07-22 10:55:34.209032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.733 [2024-07-22 10:55:34.209043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.733 qpair failed and we were unable to recover it. 
00:39:28.733 [2024-07-22 10:55:34.209346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.733 [2024-07-22 10:55:34.209357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.733 qpair failed and we were unable to recover it. 00:39:28.733 [2024-07-22 10:55:34.209707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.733 [2024-07-22 10:55:34.209719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.733 qpair failed and we were unable to recover it. 00:39:28.733 [2024-07-22 10:55:34.209905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.733 [2024-07-22 10:55:34.209916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.733 qpair failed and we were unable to recover it. 00:39:28.733 [2024-07-22 10:55:34.210260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.733 [2024-07-22 10:55:34.210270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.733 qpair failed and we were unable to recover it. 00:39:28.733 [2024-07-22 10:55:34.210448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.733 [2024-07-22 10:55:34.210460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.733 qpair failed and we were unable to recover it. 00:39:28.733 [2024-07-22 10:55:34.210794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.733 [2024-07-22 10:55:34.210805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.733 qpair failed and we were unable to recover it. 00:39:28.733 [2024-07-22 10:55:34.211153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.733 [2024-07-22 10:55:34.211163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.733 qpair failed and we were unable to recover it. 00:39:28.733 [2024-07-22 10:55:34.211517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.733 [2024-07-22 10:55:34.211528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.733 qpair failed and we were unable to recover it. 00:39:28.733 [2024-07-22 10:55:34.211875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.733 [2024-07-22 10:55:34.211885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.733 qpair failed and we were unable to recover it. 00:39:28.733 [2024-07-22 10:55:34.212208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.733 [2024-07-22 10:55:34.212219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.733 qpair failed and we were unable to recover it. 
00:39:28.733 [2024-07-22 10:55:34.212544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.733 [2024-07-22 10:55:34.212555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.733 qpair failed and we were unable to recover it. 00:39:28.733 [2024-07-22 10:55:34.212966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.733 [2024-07-22 10:55:34.212977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.733 qpair failed and we were unable to recover it. 00:39:28.733 [2024-07-22 10:55:34.213382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.733 [2024-07-22 10:55:34.213393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.733 qpair failed and we were unable to recover it. 00:39:28.733 [2024-07-22 10:55:34.213577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.733 [2024-07-22 10:55:34.213589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.733 qpair failed and we were unable to recover it. 00:39:28.733 [2024-07-22 10:55:34.213796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.733 [2024-07-22 10:55:34.213807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.733 qpair failed and we were unable to recover it. 00:39:28.733 [2024-07-22 10:55:34.214127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.733 [2024-07-22 10:55:34.214139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.733 qpair failed and we were unable to recover it. 00:39:28.733 [2024-07-22 10:55:34.214408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.733 [2024-07-22 10:55:34.214420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.733 qpair failed and we were unable to recover it. 00:39:28.733 [2024-07-22 10:55:34.214763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.733 [2024-07-22 10:55:34.214774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.733 qpair failed and we were unable to recover it. 00:39:28.733 [2024-07-22 10:55:34.215122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.733 [2024-07-22 10:55:34.215133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.733 qpair failed and we were unable to recover it. 00:39:28.733 [2024-07-22 10:55:34.215331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.733 [2024-07-22 10:55:34.215342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.733 qpair failed and we were unable to recover it. 
00:39:28.733 [2024-07-22 10:55:34.215665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.733 [2024-07-22 10:55:34.215677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.733 qpair failed and we were unable to recover it. 00:39:28.733 [2024-07-22 10:55:34.215866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.733 [2024-07-22 10:55:34.215877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.733 qpair failed and we were unable to recover it. 00:39:28.733 [2024-07-22 10:55:34.216219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.733 [2024-07-22 10:55:34.216229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.733 qpair failed and we were unable to recover it. 00:39:28.733 [2024-07-22 10:55:34.216531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.733 [2024-07-22 10:55:34.216542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.733 qpair failed and we were unable to recover it. 00:39:28.733 [2024-07-22 10:55:34.216860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.733 [2024-07-22 10:55:34.216871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.733 qpair failed and we were unable to recover it. 00:39:28.733 [2024-07-22 10:55:34.217202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.733 [2024-07-22 10:55:34.217213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.733 qpair failed and we were unable to recover it. 00:39:28.733 [2024-07-22 10:55:34.217564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.217576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 00:39:28.734 [2024-07-22 10:55:34.217914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.217924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 00:39:28.734 [2024-07-22 10:55:34.218248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.218259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 00:39:28.734 [2024-07-22 10:55:34.218455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.218466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 
00:39:28.734 [2024-07-22 10:55:34.218796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.218806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 00:39:28.734 [2024-07-22 10:55:34.219157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.219168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 00:39:28.734 [2024-07-22 10:55:34.219490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.219501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 00:39:28.734 [2024-07-22 10:55:34.219826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.219837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 00:39:28.734 [2024-07-22 10:55:34.220186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.220198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 00:39:28.734 [2024-07-22 10:55:34.220405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.220417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 00:39:28.734 [2024-07-22 10:55:34.220748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.220759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 00:39:28.734 [2024-07-22 10:55:34.221086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.221096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 00:39:28.734 [2024-07-22 10:55:34.221398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.221410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 00:39:28.734 [2024-07-22 10:55:34.221732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.221742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 
00:39:28.734 [2024-07-22 10:55:34.221978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.221991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 00:39:28.734 [2024-07-22 10:55:34.222315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.222326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 00:39:28.734 [2024-07-22 10:55:34.222648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.222659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 00:39:28.734 [2024-07-22 10:55:34.223010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.223021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 00:39:28.734 [2024-07-22 10:55:34.223348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.223359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 00:39:28.734 [2024-07-22 10:55:34.223682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.223693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 00:39:28.734 [2024-07-22 10:55:34.223887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.223898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 00:39:28.734 [2024-07-22 10:55:34.224219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.224230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 00:39:28.734 [2024-07-22 10:55:34.224423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.224436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 00:39:28.734 [2024-07-22 10:55:34.224735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.224745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 
00:39:28.734 [2024-07-22 10:55:34.225054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.225064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 00:39:28.734 [2024-07-22 10:55:34.225416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.225427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 00:39:28.734 [2024-07-22 10:55:34.225769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.225780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 00:39:28.734 [2024-07-22 10:55:34.225933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.225944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 00:39:28.734 [2024-07-22 10:55:34.226298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.226310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 00:39:28.734 [2024-07-22 10:55:34.226505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.226517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 00:39:28.734 [2024-07-22 10:55:34.226864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.226874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 00:39:28.734 [2024-07-22 10:55:34.227183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.227193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 00:39:28.734 [2024-07-22 10:55:34.227428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.227439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 00:39:28.734 [2024-07-22 10:55:34.227680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.227690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 
00:39:28.734 [2024-07-22 10:55:34.228052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.228063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 00:39:28.734 [2024-07-22 10:55:34.228375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.228386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 00:39:28.734 [2024-07-22 10:55:34.228728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.228740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 00:39:28.734 [2024-07-22 10:55:34.229095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.229106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 00:39:28.734 [2024-07-22 10:55:34.229434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.229445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 00:39:28.734 [2024-07-22 10:55:34.229783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.229793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 00:39:28.734 [2024-07-22 10:55:34.230141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.230151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 00:39:28.734 [2024-07-22 10:55:34.230461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.230473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 00:39:28.734 [2024-07-22 10:55:34.230816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.230828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 00:39:28.734 [2024-07-22 10:55:34.231152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.231164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 
00:39:28.734 [2024-07-22 10:55:34.231497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.231508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 00:39:28.734 [2024-07-22 10:55:34.231856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.231867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 00:39:28.734 [2024-07-22 10:55:34.232194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.232205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 00:39:28.734 [2024-07-22 10:55:34.232399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.232410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 00:39:28.734 [2024-07-22 10:55:34.232745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.232755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 00:39:28.734 [2024-07-22 10:55:34.233105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.233116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 00:39:28.734 [2024-07-22 10:55:34.233446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.233457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 00:39:28.734 [2024-07-22 10:55:34.233783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.233794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 00:39:28.734 [2024-07-22 10:55:34.233990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.234001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 00:39:28.734 [2024-07-22 10:55:34.234359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.234369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 
00:39:28.734 [2024-07-22 10:55:34.234706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.234717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 00:39:28.734 [2024-07-22 10:55:34.235044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.235055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 00:39:28.734 [2024-07-22 10:55:34.235404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.235415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 00:39:28.734 [2024-07-22 10:55:34.235750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.734 [2024-07-22 10:55:34.235760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.734 qpair failed and we were unable to recover it. 00:39:28.734 [2024-07-22 10:55:34.235951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.235963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 00:39:28.735 [2024-07-22 10:55:34.236286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.236296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 00:39:28.735 [2024-07-22 10:55:34.236630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.236641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 00:39:28.735 [2024-07-22 10:55:34.236835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.236845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 00:39:28.735 [2024-07-22 10:55:34.237187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.237198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 00:39:28.735 [2024-07-22 10:55:34.237366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.237378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 
00:39:28.735 [2024-07-22 10:55:34.237776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.237787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 00:39:28.735 [2024-07-22 10:55:34.238100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.238111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 00:39:28.735 [2024-07-22 10:55:34.238436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.238449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 00:39:28.735 [2024-07-22 10:55:34.238772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.238782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 00:39:28.735 [2024-07-22 10:55:34.239081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.239091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 00:39:28.735 [2024-07-22 10:55:34.239429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.239440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 00:39:28.735 [2024-07-22 10:55:34.239775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.239785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 00:39:28.735 [2024-07-22 10:55:34.240191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.240202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 00:39:28.735 [2024-07-22 10:55:34.240516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.240528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 00:39:28.735 [2024-07-22 10:55:34.240876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.240887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 
00:39:28.735 [2024-07-22 10:55:34.241119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.241130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 00:39:28.735 [2024-07-22 10:55:34.241409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.241420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 00:39:28.735 [2024-07-22 10:55:34.241613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.241624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 00:39:28.735 [2024-07-22 10:55:34.241799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.241810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 00:39:28.735 [2024-07-22 10:55:34.241990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.242000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 00:39:28.735 [2024-07-22 10:55:34.242295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.242306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 00:39:28.735 [2024-07-22 10:55:34.242495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.242505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 00:39:28.735 [2024-07-22 10:55:34.242724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.242734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 00:39:28.735 [2024-07-22 10:55:34.243066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.243076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 00:39:28.735 [2024-07-22 10:55:34.243404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.243415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 
00:39:28.735 [2024-07-22 10:55:34.243753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.243764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 00:39:28.735 [2024-07-22 10:55:34.244108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.244120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 00:39:28.735 [2024-07-22 10:55:34.244312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.244323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 00:39:28.735 [2024-07-22 10:55:34.244490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.244501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 00:39:28.735 [2024-07-22 10:55:34.244811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.244822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 00:39:28.735 [2024-07-22 10:55:34.245172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.245182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 00:39:28.735 [2024-07-22 10:55:34.245373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.245384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 00:39:28.735 [2024-07-22 10:55:34.245710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.245722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 00:39:28.735 [2024-07-22 10:55:34.246071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.246082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 00:39:28.735 [2024-07-22 10:55:34.246388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.246402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 
00:39:28.735 [2024-07-22 10:55:34.246765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.246775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 00:39:28.735 [2024-07-22 10:55:34.247088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.247099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 00:39:28.735 [2024-07-22 10:55:34.247423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.247435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 00:39:28.735 [2024-07-22 10:55:34.247621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.247633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 00:39:28.735 [2024-07-22 10:55:34.247835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.247847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 00:39:28.735 [2024-07-22 10:55:34.248187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.248198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 00:39:28.735 [2024-07-22 10:55:34.248543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.248554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 00:39:28.735 [2024-07-22 10:55:34.248604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.248614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 00:39:28.735 [2024-07-22 10:55:34.248780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.248791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 00:39:28.735 [2024-07-22 10:55:34.249098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.249109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 
00:39:28.735 [2024-07-22 10:55:34.249299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.249310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 00:39:28.735 [2024-07-22 10:55:34.249479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.249490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 00:39:28.735 [2024-07-22 10:55:34.249830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.249840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 00:39:28.735 [2024-07-22 10:55:34.250163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.250173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 00:39:28.735 [2024-07-22 10:55:34.250501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.250513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 00:39:28.735 [2024-07-22 10:55:34.250859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.250873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 00:39:28.735 [2024-07-22 10:55:34.251231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.251241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 00:39:28.735 [2024-07-22 10:55:34.251559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.251569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 00:39:28.735 [2024-07-22 10:55:34.251894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.251905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 00:39:28.735 [2024-07-22 10:55:34.252256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.252268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 
00:39:28.735 [2024-07-22 10:55:34.252461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.252472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 00:39:28.735 [2024-07-22 10:55:34.252777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.252787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 00:39:28.735 [2024-07-22 10:55:34.253128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.253139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 00:39:28.735 [2024-07-22 10:55:34.253336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.253347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 00:39:28.735 [2024-07-22 10:55:34.253673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.253684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.735 qpair failed and we were unable to recover it. 00:39:28.735 [2024-07-22 10:55:34.253872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.735 [2024-07-22 10:55:34.253883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.736 qpair failed and we were unable to recover it. 00:39:28.736 [2024-07-22 10:55:34.254053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.736 [2024-07-22 10:55:34.254063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.736 qpair failed and we were unable to recover it. 00:39:28.736 [2024-07-22 10:55:34.254375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.736 [2024-07-22 10:55:34.254385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.736 qpair failed and we were unable to recover it. 00:39:28.736 [2024-07-22 10:55:34.254734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.736 [2024-07-22 10:55:34.254746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.736 qpair failed and we were unable to recover it. 00:39:28.736 [2024-07-22 10:55:34.255070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.736 [2024-07-22 10:55:34.255081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.736 qpair failed and we were unable to recover it. 
00:39:28.736 [2024-07-22 10:55:34.255267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.736 [2024-07-22 10:55:34.255278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.736 qpair failed and we were unable to recover it. 00:39:28.736 [2024-07-22 10:55:34.255487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.736 [2024-07-22 10:55:34.255497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.736 qpair failed and we were unable to recover it. 00:39:28.736 [2024-07-22 10:55:34.255799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.736 [2024-07-22 10:55:34.255810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.736 qpair failed and we were unable to recover it. 00:39:28.736 [2024-07-22 10:55:34.256136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.736 [2024-07-22 10:55:34.256147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.736 qpair failed and we were unable to recover it. 00:39:28.736 [2024-07-22 10:55:34.256479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.736 [2024-07-22 10:55:34.256490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.736 qpair failed and we were unable to recover it. 00:39:28.736 [2024-07-22 10:55:34.256822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.736 [2024-07-22 10:55:34.256832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.736 qpair failed and we were unable to recover it. 00:39:28.736 [2024-07-22 10:55:34.257026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.736 [2024-07-22 10:55:34.257037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.736 qpair failed and we were unable to recover it. 00:39:28.736 [2024-07-22 10:55:34.257380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.736 [2024-07-22 10:55:34.257390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.736 qpair failed and we were unable to recover it. 00:39:28.736 [2024-07-22 10:55:34.257741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.736 [2024-07-22 10:55:34.257752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.736 qpair failed and we were unable to recover it. 00:39:28.736 [2024-07-22 10:55:34.258112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.736 [2024-07-22 10:55:34.258122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.736 qpair failed and we were unable to recover it. 
00:39:28.736 [2024-07-22 10:55:34.258313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.736 [2024-07-22 10:55:34.258324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.736 qpair failed and we were unable to recover it. 00:39:28.736 [2024-07-22 10:55:34.258674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.736 [2024-07-22 10:55:34.258684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.736 qpair failed and we were unable to recover it. 00:39:28.736 [2024-07-22 10:55:34.259009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.736 [2024-07-22 10:55:34.259021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.736 qpair failed and we were unable to recover it. 00:39:28.736 [2024-07-22 10:55:34.259370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.736 [2024-07-22 10:55:34.259381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.736 qpair failed and we were unable to recover it. 00:39:28.736 [2024-07-22 10:55:34.259675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.736 [2024-07-22 10:55:34.259686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.736 qpair failed and we were unable to recover it. 00:39:28.736 [2024-07-22 10:55:34.259741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.736 [2024-07-22 10:55:34.259749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.736 qpair failed and we were unable to recover it. 00:39:28.736 [2024-07-22 10:55:34.260045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.736 [2024-07-22 10:55:34.260056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.736 qpair failed and we were unable to recover it. 00:39:28.736 [2024-07-22 10:55:34.260454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.736 [2024-07-22 10:55:34.260465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.736 qpair failed and we were unable to recover it. 00:39:28.736 [2024-07-22 10:55:34.260781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.736 [2024-07-22 10:55:34.260792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.736 qpair failed and we were unable to recover it. 00:39:28.736 [2024-07-22 10:55:34.260978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.736 [2024-07-22 10:55:34.260988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.736 qpair failed and we were unable to recover it. 
00:39:28.736 [2024-07-22 10:55:34.261299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.736 [2024-07-22 10:55:34.261311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.736 qpair failed and we were unable to recover it. 00:39:28.736 [2024-07-22 10:55:34.261497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.736 [2024-07-22 10:55:34.261508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.736 qpair failed and we were unable to recover it. 00:39:28.736 [2024-07-22 10:55:34.261834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.736 [2024-07-22 10:55:34.261844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.736 qpair failed and we were unable to recover it. 00:39:28.736 [2024-07-22 10:55:34.262195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.736 [2024-07-22 10:55:34.262205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.736 qpair failed and we were unable to recover it. 00:39:28.736 [2024-07-22 10:55:34.262405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.736 [2024-07-22 10:55:34.262416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.736 qpair failed and we were unable to recover it. 00:39:28.736 [2024-07-22 10:55:34.262582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.736 [2024-07-22 10:55:34.262590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.736 qpair failed and we were unable to recover it. 00:39:28.736 [2024-07-22 10:55:34.262786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.736 [2024-07-22 10:55:34.262796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.736 qpair failed and we were unable to recover it. 00:39:28.736 [2024-07-22 10:55:34.262987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.736 [2024-07-22 10:55:34.262998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.736 qpair failed and we were unable to recover it. 00:39:28.736 [2024-07-22 10:55:34.263219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.736 [2024-07-22 10:55:34.263230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.736 qpair failed and we were unable to recover it. 00:39:28.736 [2024-07-22 10:55:34.263511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.736 [2024-07-22 10:55:34.263522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.736 qpair failed and we were unable to recover it. 
00:39:28.736 [2024-07-22 10:55:34.263890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.736 [2024-07-22 10:55:34.263900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.736 qpair failed and we were unable to recover it. 00:39:28.736 [2024-07-22 10:55:34.264092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.736 [2024-07-22 10:55:34.264102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.736 qpair failed and we were unable to recover it. 00:39:28.736 [2024-07-22 10:55:34.264329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.736 [2024-07-22 10:55:34.264341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.736 qpair failed and we were unable to recover it. 00:39:28.736 [2024-07-22 10:55:34.264630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.736 [2024-07-22 10:55:34.264641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.736 qpair failed and we were unable to recover it. 00:39:28.736 [2024-07-22 10:55:34.264988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.736 [2024-07-22 10:55:34.264999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.736 qpair failed and we were unable to recover it. 00:39:28.736 [2024-07-22 10:55:34.265352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.736 [2024-07-22 10:55:34.265363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.736 qpair failed and we were unable to recover it. 00:39:28.736 [2024-07-22 10:55:34.265762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.736 [2024-07-22 10:55:34.265773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.736 qpair failed and we were unable to recover it. 00:39:28.736 [2024-07-22 10:55:34.266088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.736 [2024-07-22 10:55:34.266099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.736 qpair failed and we were unable to recover it. 00:39:28.736 [2024-07-22 10:55:34.266300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.736 [2024-07-22 10:55:34.266311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.736 qpair failed and we were unable to recover it. 00:39:28.736 [2024-07-22 10:55:34.266525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.736 [2024-07-22 10:55:34.266538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.736 qpair failed and we were unable to recover it. 
00:39:28.736 [2024-07-22 10:55:34.266886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.736 [2024-07-22 10:55:34.266897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.736 qpair failed and we were unable to recover it. 00:39:28.736 [2024-07-22 10:55:34.267214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.736 [2024-07-22 10:55:34.267225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.736 qpair failed and we were unable to recover it. 00:39:28.736 [2024-07-22 10:55:34.267571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.736 [2024-07-22 10:55:34.267582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.736 qpair failed and we were unable to recover it. 00:39:28.736 [2024-07-22 10:55:34.267930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.736 [2024-07-22 10:55:34.267941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.736 qpair failed and we were unable to recover it. 00:39:28.736 [2024-07-22 10:55:34.268266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.736 [2024-07-22 10:55:34.268277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.736 qpair failed and we were unable to recover it. 00:39:28.736 [2024-07-22 10:55:34.268661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.736 [2024-07-22 10:55:34.268672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.736 qpair failed and we were unable to recover it. 00:39:28.736 [2024-07-22 10:55:34.269017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.736 [2024-07-22 10:55:34.269028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.736 qpair failed and we were unable to recover it. 00:39:28.736 [2024-07-22 10:55:34.269380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.736 [2024-07-22 10:55:34.269391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.736 qpair failed and we were unable to recover it. 00:39:28.736 [2024-07-22 10:55:34.269735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.736 [2024-07-22 10:55:34.269746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.736 qpair failed and we were unable to recover it. 00:39:28.736 [2024-07-22 10:55:34.269796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.736 [2024-07-22 10:55:34.269806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.736 qpair failed and we were unable to recover it. 
00:39:28.736 [2024-07-22 10:55:34.270134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.736 [2024-07-22 10:55:34.270145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.270340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.270351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.270549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.270560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.270770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.270781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.270972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.270984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.271296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.271307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.271504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.271514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.271705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.271715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.272019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.272029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.272354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.272364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 
00:39:28.737 [2024-07-22 10:55:34.272714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.272726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.272959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.272970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.273255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.273266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.273584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.273595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.273798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.273809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.274007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.274018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.274328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.274339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.274705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.274716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.274916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.274926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.275249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.275259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 
00:39:28.737 [2024-07-22 10:55:34.275573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.275583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.275772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.275782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.276082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.276093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.276280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.276291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.276462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.276473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.276801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.276813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.277109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.277120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.277443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.277454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.277641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.277652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.277965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.277976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 
00:39:28.737 [2024-07-22 10:55:34.278327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.278339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.278697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.278708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.279028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.279038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.279477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.279488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.279832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.279843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.280217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.280227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.280597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.280609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.280798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.280809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.281003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.281014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.281384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.281399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 
00:39:28.737 [2024-07-22 10:55:34.281722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.281732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.282082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.282093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.282444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.282456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.282776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.282786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.283154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.283164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.283354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.283364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.283551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.283563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.283839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.283850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.284083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.284094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.284386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.284400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 
00:39:28.737 [2024-07-22 10:55:34.284726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.284737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.285029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.285040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.285224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.285234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.285536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.285547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.285906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.285917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.286104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.286115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.286400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.286410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.286719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.286731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.287112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.287124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.287438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.287450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 
00:39:28.737 [2024-07-22 10:55:34.287778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.287789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.288138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.288149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.288497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.288508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.288864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.288875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.289202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.289213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.289566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.289577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.289887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.289897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-07-22 10:55:34.290228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-07-22 10:55:34.290238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-07-22 10:55:34.290568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-07-22 10:55:34.290579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-07-22 10:55:34.290898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-07-22 10:55:34.290908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 
00:39:28.738 [2024-07-22 10:55:34.291202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-07-22 10:55:34.291213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-07-22 10:55:34.291538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-07-22 10:55:34.291550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-07-22 10:55:34.291872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-07-22 10:55:34.291884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-07-22 10:55:34.292183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-07-22 10:55:34.292194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-07-22 10:55:34.292501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-07-22 10:55:34.292512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-07-22 10:55:34.292814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-07-22 10:55:34.292825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-07-22 10:55:34.293139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-07-22 10:55:34.293150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-07-22 10:55:34.293505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-07-22 10:55:34.293516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-07-22 10:55:34.293803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-07-22 10:55:34.293814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-07-22 10:55:34.294010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-07-22 10:55:34.294021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 
00:39:28.738 [2024-07-22 10:55:34.294304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-07-22 10:55:34.294314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-07-22 10:55:34.294707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-07-22 10:55:34.294718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-07-22 10:55:34.295071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-07-22 10:55:34.295082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-07-22 10:55:34.295401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-07-22 10:55:34.295412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-07-22 10:55:34.295752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-07-22 10:55:34.295764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-07-22 10:55:34.296111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-07-22 10:55:34.296121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-07-22 10:55:34.296437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-07-22 10:55:34.296448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-07-22 10:55:34.296792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-07-22 10:55:34.296802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-07-22 10:55:34.297200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-07-22 10:55:34.297211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-07-22 10:55:34.297524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-07-22 10:55:34.297535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 
00:39:28.738 [2024-07-22 10:55:34.297888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-07-22 10:55:34.297899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-07-22 10:55:34.298297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-07-22 10:55:34.298308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-07-22 10:55:34.298623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-07-22 10:55:34.298634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-07-22 10:55:34.298986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-07-22 10:55:34.298996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-07-22 10:55:34.299346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-07-22 10:55:34.299357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-07-22 10:55:34.299756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-07-22 10:55:34.299767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-07-22 10:55:34.300036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-07-22 10:55:34.300046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-07-22 10:55:34.300239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-07-22 10:55:34.300249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-07-22 10:55:34.300576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-07-22 10:55:34.300587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-07-22 10:55:34.300912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-07-22 10:55:34.300922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 
00:39:28.738 [2024-07-22 10:55:34.301108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-07-22 10:55:34.301119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-07-22 10:55:34.301437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-07-22 10:55:34.301448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-07-22 10:55:34.301819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-07-22 10:55:34.301830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-07-22 10:55:34.302146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-07-22 10:55:34.302157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-07-22 10:55:34.302481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-07-22 10:55:34.302491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-07-22 10:55:34.302849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-07-22 10:55:34.302860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-07-22 10:55:34.303214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-07-22 10:55:34.303224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-07-22 10:55:34.303523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-07-22 10:55:34.303534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-07-22 10:55:34.303709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-07-22 10:55:34.303720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-07-22 10:55:34.304056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-07-22 10:55:34.304066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 
00:39:28.738 [2024-07-22 10:55:34.304421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-07-22 10:55:34.304432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-07-22 10:55:34.304769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-07-22 10:55:34.304779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-07-22 10:55:34.305119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-07-22 10:55:34.305130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-07-22 10:55:34.305321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-07-22 10:55:34.305332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-07-22 10:55:34.305656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-07-22 10:55:34.305667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-07-22 10:55:34.305866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-07-22 10:55:34.305877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-07-22 10:55:34.306215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-07-22 10:55:34.306225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-07-22 10:55:34.306574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-07-22 10:55:34.306585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-07-22 10:55:34.306935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-07-22 10:55:34.306946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-07-22 10:55:34.307269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-07-22 10:55:34.307280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 
00:39:28.742 [2024-07-22 10:55:34.365155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.365166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 00:39:28.742 [2024-07-22 10:55:34.365516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.365528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 00:39:28.742 [2024-07-22 10:55:34.365842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.365853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 00:39:28.742 [2024-07-22 10:55:34.366178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.366189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 00:39:28.742 [2024-07-22 10:55:34.366378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.366388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 00:39:28.742 [2024-07-22 10:55:34.366611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.366623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 00:39:28.742 [2024-07-22 10:55:34.366819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.366830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 00:39:28.742 [2024-07-22 10:55:34.366995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.367006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 00:39:28.742 [2024-07-22 10:55:34.367339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.367350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 00:39:28.742 [2024-07-22 10:55:34.367702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.367714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 
00:39:28.742 [2024-07-22 10:55:34.368063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.368074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 00:39:28.742 [2024-07-22 10:55:34.368409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.368420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 00:39:28.742 [2024-07-22 10:55:34.368754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.368765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 00:39:28.742 [2024-07-22 10:55:34.369064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.369075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 00:39:28.742 [2024-07-22 10:55:34.369266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.369276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 00:39:28.742 [2024-07-22 10:55:34.369604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.369616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 00:39:28.742 [2024-07-22 10:55:34.369932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.369943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 00:39:28.742 [2024-07-22 10:55:34.370295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.370306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 00:39:28.742 [2024-07-22 10:55:34.370631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.370642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 00:39:28.742 [2024-07-22 10:55:34.370982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.370993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 
00:39:28.742 [2024-07-22 10:55:34.371352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.371363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 00:39:28.742 [2024-07-22 10:55:34.371711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.371722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 00:39:28.742 [2024-07-22 10:55:34.372072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.372082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 00:39:28.742 [2024-07-22 10:55:34.372425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.372435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 00:39:28.742 [2024-07-22 10:55:34.372655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.372666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 00:39:28.742 [2024-07-22 10:55:34.372970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.372982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 00:39:28.742 [2024-07-22 10:55:34.373175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.373185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 00:39:28.742 [2024-07-22 10:55:34.373387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.373402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 00:39:28.742 [2024-07-22 10:55:34.373568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.373579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 00:39:28.742 [2024-07-22 10:55:34.373776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.373787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 
00:39:28.742 [2024-07-22 10:55:34.373973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.373985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 00:39:28.742 [2024-07-22 10:55:34.374318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.374328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 00:39:28.742 [2024-07-22 10:55:34.374662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.374673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 00:39:28.742 [2024-07-22 10:55:34.375029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.375039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 00:39:28.742 [2024-07-22 10:55:34.375368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.375378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 00:39:28.742 [2024-07-22 10:55:34.375602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.375613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 00:39:28.742 [2024-07-22 10:55:34.375807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.375818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 00:39:28.742 [2024-07-22 10:55:34.375862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.375873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 00:39:28.742 [2024-07-22 10:55:34.375986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.375995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 00:39:28.742 [2024-07-22 10:55:34.376360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.376370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 
00:39:28.742 [2024-07-22 10:55:34.376686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.376697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 00:39:28.742 [2024-07-22 10:55:34.376888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.376899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 00:39:28.742 [2024-07-22 10:55:34.377108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.377120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 00:39:28.742 [2024-07-22 10:55:34.377313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.377323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 00:39:28.742 [2024-07-22 10:55:34.377669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.377680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 00:39:28.742 [2024-07-22 10:55:34.377876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.377887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 00:39:28.742 [2024-07-22 10:55:34.378132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.378143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 00:39:28.742 [2024-07-22 10:55:34.378492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.378503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 00:39:28.742 [2024-07-22 10:55:34.378692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.378703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 00:39:28.742 [2024-07-22 10:55:34.379051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.379061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 
00:39:28.742 [2024-07-22 10:55:34.379262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.379273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 00:39:28.742 [2024-07-22 10:55:34.379614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.379625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 00:39:28.742 [2024-07-22 10:55:34.379970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.379981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 00:39:28.742 [2024-07-22 10:55:34.380348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.380360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 00:39:28.742 [2024-07-22 10:55:34.380705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.380716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 00:39:28.742 [2024-07-22 10:55:34.381077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.742 [2024-07-22 10:55:34.381088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.742 qpair failed and we were unable to recover it. 00:39:28.742 [2024-07-22 10:55:34.381421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.381432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.381785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.381795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.382112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.382122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.382477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.382490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 
00:39:28.743 [2024-07-22 10:55:34.382810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.382820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.383146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.383157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.383507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.383518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.383703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.383713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.383898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.383908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.384256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.384267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.384606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.384617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.384973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.384983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.385174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.385185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.385241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.385251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 
00:39:28.743 [2024-07-22 10:55:34.385532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.385543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.385893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.385904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.386192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.386203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.386391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.386413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.386728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.386738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.387088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.387099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.387430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.387442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.387665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.387676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.388013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.388024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.388342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.388352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 
00:39:28.743 [2024-07-22 10:55:34.388751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.388763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.389077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.389088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.389383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.389398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.389603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.389615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.389908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.389918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.390244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.390255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.390596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.390609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.390963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.390973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.391298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.391309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.391498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.391509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 
00:39:28.743 [2024-07-22 10:55:34.391867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.391877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.392189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.392200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.392533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.392544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.392915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.392927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.393275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.393287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.393482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.393494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.393792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.393803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.394016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.394027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.394219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.394231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.394568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.394579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 
00:39:28.743 [2024-07-22 10:55:34.394927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.394938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.395272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.395283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.395621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.395632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.395985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.395996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.396179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.396190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.396246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.396257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.396556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.396567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.396756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.396768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.396967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.396978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.397309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.397319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 
00:39:28.743 [2024-07-22 10:55:34.397582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.397593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.397896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.397907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.398094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.398105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.398429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.398442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.398622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.398634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.398984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.398995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.399187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.399197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.399547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.399559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.399898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.743 [2024-07-22 10:55:34.399908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.743 qpair failed and we were unable to recover it. 00:39:28.743 [2024-07-22 10:55:34.400104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.744 [2024-07-22 10:55:34.400115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.744 qpair failed and we were unable to recover it. 
00:39:28.744 [2024-07-22 10:55:34.400437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.744 [2024-07-22 10:55:34.400448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.744 qpair failed and we were unable to recover it. 00:39:28.744 [2024-07-22 10:55:34.400776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.744 [2024-07-22 10:55:34.400786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.744 qpair failed and we were unable to recover it. 00:39:28.744 [2024-07-22 10:55:34.401113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.744 [2024-07-22 10:55:34.401124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.744 qpair failed and we were unable to recover it. 00:39:28.744 [2024-07-22 10:55:34.401484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.744 [2024-07-22 10:55:34.401495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.744 qpair failed and we were unable to recover it. 00:39:28.744 [2024-07-22 10:55:34.401812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.744 [2024-07-22 10:55:34.401823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.744 qpair failed and we were unable to recover it. 00:39:28.744 [2024-07-22 10:55:34.402192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.744 [2024-07-22 10:55:34.402202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.744 qpair failed and we were unable to recover it. 00:39:28.744 [2024-07-22 10:55:34.402562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.744 [2024-07-22 10:55:34.402573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.744 qpair failed and we were unable to recover it. 00:39:28.744 [2024-07-22 10:55:34.402896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.744 [2024-07-22 10:55:34.402907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.744 qpair failed and we were unable to recover it. 00:39:28.744 [2024-07-22 10:55:34.403091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.744 [2024-07-22 10:55:34.403102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.744 qpair failed and we were unable to recover it. 00:39:28.744 [2024-07-22 10:55:34.403412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.744 [2024-07-22 10:55:34.403423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.744 qpair failed and we were unable to recover it. 
00:39:28.744 [2024-07-22 10:55:34.403664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.744 [2024-07-22 10:55:34.403674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.744 qpair failed and we were unable to recover it. 00:39:28.744 [2024-07-22 10:55:34.403980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.744 [2024-07-22 10:55:34.403990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.744 qpair failed and we were unable to recover it. 00:39:28.744 [2024-07-22 10:55:34.404299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.744 [2024-07-22 10:55:34.404310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.744 qpair failed and we were unable to recover it. 00:39:28.744 [2024-07-22 10:55:34.404499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.744 [2024-07-22 10:55:34.404511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.744 qpair failed and we were unable to recover it. 00:39:28.744 [2024-07-22 10:55:34.404736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.744 [2024-07-22 10:55:34.404747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.744 qpair failed and we were unable to recover it. 00:39:28.744 [2024-07-22 10:55:34.405053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.744 [2024-07-22 10:55:34.405063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.744 qpair failed and we were unable to recover it. 00:39:28.744 [2024-07-22 10:55:34.405412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.744 [2024-07-22 10:55:34.405422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.744 qpair failed and we were unable to recover it. 00:39:28.744 [2024-07-22 10:55:34.405620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.744 [2024-07-22 10:55:34.405632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.744 qpair failed and we were unable to recover it. 00:39:28.744 [2024-07-22 10:55:34.405780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.744 [2024-07-22 10:55:34.405790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.744 qpair failed and we were unable to recover it. 00:39:28.744 [2024-07-22 10:55:34.406062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.744 [2024-07-22 10:55:34.406072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.744 qpair failed and we were unable to recover it. 
00:39:28.744 [2024-07-22 10:55:34.406271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.744 [2024-07-22 10:55:34.406282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.744 qpair failed and we were unable to recover it. 00:39:28.744 [2024-07-22 10:55:34.406582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.744 [2024-07-22 10:55:34.406593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.744 qpair failed and we were unable to recover it. 00:39:28.744 [2024-07-22 10:55:34.406915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.744 [2024-07-22 10:55:34.406926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.744 qpair failed and we were unable to recover it. 00:39:28.744 [2024-07-22 10:55:34.407271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.744 [2024-07-22 10:55:34.407282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.744 qpair failed and we were unable to recover it. 00:39:28.744 [2024-07-22 10:55:34.407621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.744 [2024-07-22 10:55:34.407632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.744 qpair failed and we were unable to recover it. 00:39:28.744 [2024-07-22 10:55:34.407825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.744 [2024-07-22 10:55:34.407835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.744 qpair failed and we were unable to recover it. 00:39:28.744 [2024-07-22 10:55:34.408121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.744 [2024-07-22 10:55:34.408132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.744 qpair failed and we were unable to recover it. 00:39:28.744 [2024-07-22 10:55:34.408320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.744 [2024-07-22 10:55:34.408331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.744 qpair failed and we were unable to recover it. 00:39:28.744 [2024-07-22 10:55:34.408520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.744 [2024-07-22 10:55:34.408530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.744 qpair failed and we were unable to recover it. 00:39:28.744 [2024-07-22 10:55:34.408732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.744 [2024-07-22 10:55:34.408743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.744 qpair failed and we were unable to recover it. 
00:39:28.744 [2024-07-22 10:55:34.408934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.744 [2024-07-22 10:55:34.408945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.744 qpair failed and we were unable to recover it. 00:39:28.744 [2024-07-22 10:55:34.409263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.744 [2024-07-22 10:55:34.409273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.744 qpair failed and we were unable to recover it. 00:39:28.744 [2024-07-22 10:55:34.409478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.744 [2024-07-22 10:55:34.409489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.744 qpair failed and we were unable to recover it. 00:39:28.744 [2024-07-22 10:55:34.409677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.744 [2024-07-22 10:55:34.409687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.744 qpair failed and we were unable to recover it. 00:39:28.744 [2024-07-22 10:55:34.409855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.744 [2024-07-22 10:55:34.409867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.744 qpair failed and we were unable to recover it. 00:39:28.744 [2024-07-22 10:55:34.410187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.744 [2024-07-22 10:55:34.410198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.744 qpair failed and we were unable to recover it. 00:39:28.744 [2024-07-22 10:55:34.410547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.744 [2024-07-22 10:55:34.410558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.744 qpair failed and we were unable to recover it. 00:39:28.744 [2024-07-22 10:55:34.410915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.744 [2024-07-22 10:55:34.410926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.744 qpair failed and we were unable to recover it. 00:39:28.744 [2024-07-22 10:55:34.411274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.744 [2024-07-22 10:55:34.411284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.744 qpair failed and we were unable to recover it. 00:39:28.744 [2024-07-22 10:55:34.411623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.744 [2024-07-22 10:55:34.411634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.744 qpair failed and we were unable to recover it. 
00:39:28.744 [2024-07-22 10:55:34.411980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.744 [2024-07-22 10:55:34.411992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.744 qpair failed and we were unable to recover it. 00:39:28.744 [2024-07-22 10:55:34.412184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.744 [2024-07-22 10:55:34.412194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.744 qpair failed and we were unable to recover it. 00:39:28.744 [2024-07-22 10:55:34.412455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.744 [2024-07-22 10:55:34.412466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.744 qpair failed and we were unable to recover it. 00:39:28.744 [2024-07-22 10:55:34.412814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.744 [2024-07-22 10:55:34.412825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.744 qpair failed and we were unable to recover it. 00:39:28.744 [2024-07-22 10:55:34.413026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.744 [2024-07-22 10:55:34.413036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.744 qpair failed and we were unable to recover it. 00:39:28.744 [2024-07-22 10:55:34.413376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.744 [2024-07-22 10:55:34.413387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.744 qpair failed and we were unable to recover it. 00:39:28.744 [2024-07-22 10:55:34.413578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.744 [2024-07-22 10:55:34.413590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.744 qpair failed and we were unable to recover it. 00:39:28.744 [2024-07-22 10:55:34.413762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.744 [2024-07-22 10:55:34.413773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.744 qpair failed and we were unable to recover it. 00:39:28.744 [2024-07-22 10:55:34.414114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.744 [2024-07-22 10:55:34.414125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.744 qpair failed and we were unable to recover it. 00:39:28.745 [2024-07-22 10:55:34.414464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.745 [2024-07-22 10:55:34.414476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.745 qpair failed and we were unable to recover it. 
00:39:28.745 [2024-07-22 10:55:34.414667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.745 [2024-07-22 10:55:34.414678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.745 qpair failed and we were unable to recover it. 00:39:28.745 [2024-07-22 10:55:34.414892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.745 [2024-07-22 10:55:34.414902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.745 qpair failed and we were unable to recover it. 00:39:28.745 [2024-07-22 10:55:34.415116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.745 [2024-07-22 10:55:34.415126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.745 qpair failed and we were unable to recover it. 00:39:28.745 [2024-07-22 10:55:34.415447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.745 [2024-07-22 10:55:34.415459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.745 qpair failed and we were unable to recover it. 00:39:28.745 [2024-07-22 10:55:34.415779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.745 [2024-07-22 10:55:34.415789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.745 qpair failed and we were unable to recover it. 00:39:28.745 [2024-07-22 10:55:34.416140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.745 [2024-07-22 10:55:34.416151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:28.745 qpair failed and we were unable to recover it. 00:39:29.013 [2024-07-22 10:55:34.416500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.013 [2024-07-22 10:55:34.416513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.013 qpair failed and we were unable to recover it. 00:39:29.013 [2024-07-22 10:55:34.416856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.013 [2024-07-22 10:55:34.416867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.013 qpair failed and we were unable to recover it. 00:39:29.013 [2024-07-22 10:55:34.417225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.013 [2024-07-22 10:55:34.417236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.013 qpair failed and we were unable to recover it. 00:39:29.013 [2024-07-22 10:55:34.417626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.013 [2024-07-22 10:55:34.417637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.013 qpair failed and we were unable to recover it. 
00:39:29.013 [2024-07-22 10:55:34.417959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.013 [2024-07-22 10:55:34.417969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.013 qpair failed and we were unable to recover it. 00:39:29.013 [2024-07-22 10:55:34.418305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.013 [2024-07-22 10:55:34.418317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.013 qpair failed and we were unable to recover it. 00:39:29.013 [2024-07-22 10:55:34.418642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.013 [2024-07-22 10:55:34.418653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.013 qpair failed and we were unable to recover it. 00:39:29.013 [2024-07-22 10:55:34.419004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.013 [2024-07-22 10:55:34.419015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.013 qpair failed and we were unable to recover it. 00:39:29.013 [2024-07-22 10:55:34.419364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.013 [2024-07-22 10:55:34.419376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.013 qpair failed and we were unable to recover it. 00:39:29.013 [2024-07-22 10:55:34.419565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.013 [2024-07-22 10:55:34.419576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.013 qpair failed and we were unable to recover it. 00:39:29.013 [2024-07-22 10:55:34.419924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.013 [2024-07-22 10:55:34.419936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.013 qpair failed and we were unable to recover it. 00:39:29.013 [2024-07-22 10:55:34.420281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.013 [2024-07-22 10:55:34.420292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.013 qpair failed and we were unable to recover it. 00:39:29.013 [2024-07-22 10:55:34.420619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.013 [2024-07-22 10:55:34.420630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.013 qpair failed and we were unable to recover it. 00:39:29.013 [2024-07-22 10:55:34.420926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.013 [2024-07-22 10:55:34.420937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.013 qpair failed and we were unable to recover it. 
00:39:29.013 [2024-07-22 10:55:34.421222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.013 [2024-07-22 10:55:34.421233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.013 qpair failed and we were unable to recover it. 00:39:29.013 [2024-07-22 10:55:34.421582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.013 [2024-07-22 10:55:34.421594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.013 qpair failed and we were unable to recover it. 00:39:29.013 [2024-07-22 10:55:34.421949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.013 [2024-07-22 10:55:34.421959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.013 qpair failed and we were unable to recover it. 00:39:29.013 [2024-07-22 10:55:34.422282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.013 [2024-07-22 10:55:34.422293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.013 qpair failed and we were unable to recover it. 00:39:29.013 [2024-07-22 10:55:34.422639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.013 [2024-07-22 10:55:34.422651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.013 qpair failed and we were unable to recover it. 00:39:29.013 [2024-07-22 10:55:34.423001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.013 [2024-07-22 10:55:34.423013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.013 qpair failed and we were unable to recover it. 00:39:29.013 [2024-07-22 10:55:34.423360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.013 [2024-07-22 10:55:34.423371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.013 qpair failed and we were unable to recover it. 00:39:29.013 [2024-07-22 10:55:34.423571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.013 [2024-07-22 10:55:34.423582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.013 qpair failed and we were unable to recover it. 00:39:29.013 [2024-07-22 10:55:34.423784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.013 [2024-07-22 10:55:34.423796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.013 qpair failed and we were unable to recover it. 00:39:29.013 [2024-07-22 10:55:34.423995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.013 [2024-07-22 10:55:34.424006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.013 qpair failed and we were unable to recover it. 
00:39:29.013 [2024-07-22 10:55:34.424357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.013 [2024-07-22 10:55:34.424367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.013 qpair failed and we were unable to recover it. 00:39:29.013 [2024-07-22 10:55:34.424695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.013 [2024-07-22 10:55:34.424707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.013 qpair failed and we were unable to recover it. 00:39:29.013 [2024-07-22 10:55:34.424902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.013 [2024-07-22 10:55:34.424913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.013 qpair failed and we were unable to recover it. 00:39:29.013 [2024-07-22 10:55:34.425178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.013 [2024-07-22 10:55:34.425189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.013 qpair failed and we were unable to recover it. 00:39:29.013 [2024-07-22 10:55:34.425245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.013 [2024-07-22 10:55:34.425255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.013 qpair failed and we were unable to recover it. 00:39:29.013 [2024-07-22 10:55:34.425308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.013 [2024-07-22 10:55:34.425318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.013 qpair failed and we were unable to recover it. 00:39:29.013 [2024-07-22 10:55:34.425482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.013 [2024-07-22 10:55:34.425493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.013 qpair failed and we were unable to recover it. 00:39:29.013 [2024-07-22 10:55:34.425808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.013 [2024-07-22 10:55:34.425819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.013 qpair failed and we were unable to recover it. 00:39:29.013 [2024-07-22 10:55:34.426198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.013 [2024-07-22 10:55:34.426211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.013 qpair failed and we were unable to recover it. 00:39:29.013 [2024-07-22 10:55:34.426409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.013 [2024-07-22 10:55:34.426420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.013 qpair failed and we were unable to recover it. 
00:39:29.013 [2024-07-22 10:55:34.426750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.013 [2024-07-22 10:55:34.426761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.013 qpair failed and we were unable to recover it. 00:39:29.013 [2024-07-22 10:55:34.427089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.013 [2024-07-22 10:55:34.427100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.013 qpair failed and we were unable to recover it. 00:39:29.014 [2024-07-22 10:55:34.427426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.014 [2024-07-22 10:55:34.427437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.014 qpair failed and we were unable to recover it. 00:39:29.014 [2024-07-22 10:55:34.427770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.014 [2024-07-22 10:55:34.427781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.014 qpair failed and we were unable to recover it. 00:39:29.014 [2024-07-22 10:55:34.428132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.014 [2024-07-22 10:55:34.428142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.014 qpair failed and we were unable to recover it. 00:39:29.014 [2024-07-22 10:55:34.428336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.014 [2024-07-22 10:55:34.428347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.014 qpair failed and we were unable to recover it. 00:39:29.014 [2024-07-22 10:55:34.428671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.014 [2024-07-22 10:55:34.428683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.014 qpair failed and we were unable to recover it. 00:39:29.014 [2024-07-22 10:55:34.429011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.014 [2024-07-22 10:55:34.429022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.014 qpair failed and we were unable to recover it. 00:39:29.014 [2024-07-22 10:55:34.429371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.014 [2024-07-22 10:55:34.429382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.014 qpair failed and we were unable to recover it. 00:39:29.014 [2024-07-22 10:55:34.429771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.014 [2024-07-22 10:55:34.429782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.014 qpair failed and we were unable to recover it. 
00:39:29.014 [2024-07-22 10:55:34.430102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.014 [2024-07-22 10:55:34.430113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.014 qpair failed and we were unable to recover it. 00:39:29.014 [2024-07-22 10:55:34.430467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.014 [2024-07-22 10:55:34.430478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.014 qpair failed and we were unable to recover it. 00:39:29.014 [2024-07-22 10:55:34.430833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.014 [2024-07-22 10:55:34.430845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.014 qpair failed and we were unable to recover it. 00:39:29.014 [2024-07-22 10:55:34.431045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.014 [2024-07-22 10:55:34.431055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.014 qpair failed and we were unable to recover it. 00:39:29.014 [2024-07-22 10:55:34.431346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.014 [2024-07-22 10:55:34.431357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.014 qpair failed and we were unable to recover it. 00:39:29.014 [2024-07-22 10:55:34.431707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.014 [2024-07-22 10:55:34.431719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.014 qpair failed and we were unable to recover it. 00:39:29.014 [2024-07-22 10:55:34.432040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.014 [2024-07-22 10:55:34.432051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.014 qpair failed and we were unable to recover it. 00:39:29.014 [2024-07-22 10:55:34.432389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.014 [2024-07-22 10:55:34.432404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.014 qpair failed and we were unable to recover it. 00:39:29.014 [2024-07-22 10:55:34.432747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.014 [2024-07-22 10:55:34.432758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.014 qpair failed and we were unable to recover it. 00:39:29.014 [2024-07-22 10:55:34.433104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.014 [2024-07-22 10:55:34.433115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.014 qpair failed and we were unable to recover it. 
00:39:29.014 [2024-07-22 10:55:34.433471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.014 [2024-07-22 10:55:34.433482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.014 qpair failed and we were unable to recover it. 00:39:29.014 [2024-07-22 10:55:34.433824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.014 [2024-07-22 10:55:34.433834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.014 qpair failed and we were unable to recover it. 00:39:29.014 [2024-07-22 10:55:34.434163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.014 [2024-07-22 10:55:34.434173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.014 qpair failed and we were unable to recover it. 00:39:29.014 [2024-07-22 10:55:34.434534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.014 [2024-07-22 10:55:34.434546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.014 qpair failed and we were unable to recover it. 00:39:29.014 [2024-07-22 10:55:34.434903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.014 [2024-07-22 10:55:34.434914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.014 qpair failed and we were unable to recover it. 00:39:29.014 [2024-07-22 10:55:34.435222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.014 [2024-07-22 10:55:34.435233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.014 qpair failed and we were unable to recover it. 00:39:29.014 [2024-07-22 10:55:34.435428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.014 [2024-07-22 10:55:34.435439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.014 qpair failed and we were unable to recover it. 00:39:29.014 [2024-07-22 10:55:34.435601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.014 [2024-07-22 10:55:34.435612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.014 qpair failed and we were unable to recover it. 00:39:29.014 [2024-07-22 10:55:34.435777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.014 [2024-07-22 10:55:34.435787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.014 qpair failed and we were unable to recover it. 00:39:29.014 [2024-07-22 10:55:34.436126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.014 [2024-07-22 10:55:34.436136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.014 qpair failed and we were unable to recover it. 
00:39:29.014 [2024-07-22 10:55:34.436495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.014 [2024-07-22 10:55:34.436506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.014 qpair failed and we were unable to recover it. 00:39:29.014 [2024-07-22 10:55:34.436854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.014 [2024-07-22 10:55:34.436865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.014 qpair failed and we were unable to recover it. 00:39:29.014 [2024-07-22 10:55:34.437248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.014 [2024-07-22 10:55:34.437259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.014 qpair failed and we were unable to recover it. 00:39:29.014 [2024-07-22 10:55:34.437610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.014 [2024-07-22 10:55:34.437621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.014 qpair failed and we were unable to recover it. 00:39:29.014 [2024-07-22 10:55:34.437761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.014 [2024-07-22 10:55:34.437771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.014 qpair failed and we were unable to recover it. 00:39:29.014 [2024-07-22 10:55:34.438103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.014 [2024-07-22 10:55:34.438113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.014 qpair failed and we were unable to recover it. 00:39:29.014 [2024-07-22 10:55:34.438431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.014 [2024-07-22 10:55:34.438442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.014 qpair failed and we were unable to recover it. 00:39:29.014 [2024-07-22 10:55:34.438665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.014 [2024-07-22 10:55:34.438676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.014 qpair failed and we were unable to recover it. 00:39:29.014 [2024-07-22 10:55:34.439078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.014 [2024-07-22 10:55:34.439088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.014 qpair failed and we were unable to recover it. 00:39:29.014 [2024-07-22 10:55:34.439410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.014 [2024-07-22 10:55:34.439422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.014 qpair failed and we were unable to recover it. 
00:39:29.014 [2024-07-22 10:55:34.439774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.014 [2024-07-22 10:55:34.439784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.014 qpair failed and we were unable to recover it. 00:39:29.014 [2024-07-22 10:55:34.440111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.014 [2024-07-22 10:55:34.440121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.014 qpair failed and we were unable to recover it. 00:39:29.014 [2024-07-22 10:55:34.440451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.014 [2024-07-22 10:55:34.440463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.014 qpair failed and we were unable to recover it. 00:39:29.015 [2024-07-22 10:55:34.440783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.015 [2024-07-22 10:55:34.440793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.015 qpair failed and we were unable to recover it. 00:39:29.015 [2024-07-22 10:55:34.441157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.015 [2024-07-22 10:55:34.441167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.015 qpair failed and we were unable to recover it. 00:39:29.015 [2024-07-22 10:55:34.441495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.015 [2024-07-22 10:55:34.441507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.015 qpair failed and we were unable to recover it. 00:39:29.015 [2024-07-22 10:55:34.441830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.015 [2024-07-22 10:55:34.441841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.015 qpair failed and we were unable to recover it. 00:39:29.015 [2024-07-22 10:55:34.442229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.015 [2024-07-22 10:55:34.442239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.015 qpair failed and we were unable to recover it. 00:39:29.015 [2024-07-22 10:55:34.442582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.015 [2024-07-22 10:55:34.442592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.015 qpair failed and we were unable to recover it. 00:39:29.015 [2024-07-22 10:55:34.442911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.015 [2024-07-22 10:55:34.442922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.015 qpair failed and we were unable to recover it. 
00:39:29.015 [2024-07-22 10:55:34.443153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.015 [2024-07-22 10:55:34.443163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.015 qpair failed and we were unable to recover it. 00:39:29.015 [2024-07-22 10:55:34.443473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.015 [2024-07-22 10:55:34.443486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.015 qpair failed and we were unable to recover it. 00:39:29.015 [2024-07-22 10:55:34.443657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.015 [2024-07-22 10:55:34.443670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.015 qpair failed and we were unable to recover it. 00:39:29.015 [2024-07-22 10:55:34.444004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.015 [2024-07-22 10:55:34.444014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.015 qpair failed and we were unable to recover it. 00:39:29.015 [2024-07-22 10:55:34.444353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.015 [2024-07-22 10:55:34.444364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.015 qpair failed and we were unable to recover it. 00:39:29.015 [2024-07-22 10:55:34.444564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.015 [2024-07-22 10:55:34.444575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.015 qpair failed and we were unable to recover it. 00:39:29.015 [2024-07-22 10:55:34.444904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.015 [2024-07-22 10:55:34.444916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.015 qpair failed and we were unable to recover it. 00:39:29.015 [2024-07-22 10:55:34.445261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.015 [2024-07-22 10:55:34.445271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.015 qpair failed and we were unable to recover it. 00:39:29.015 [2024-07-22 10:55:34.445575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.015 [2024-07-22 10:55:34.445588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.015 qpair failed and we were unable to recover it. 00:39:29.015 [2024-07-22 10:55:34.445937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.015 [2024-07-22 10:55:34.445948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.015 qpair failed and we were unable to recover it. 
00:39:29.015 [2024-07-22 10:55:34.446301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.015 [2024-07-22 10:55:34.446312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.015 qpair failed and we were unable to recover it. 00:39:29.015 [2024-07-22 10:55:34.446661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.015 [2024-07-22 10:55:34.446672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.015 qpair failed and we were unable to recover it. 00:39:29.015 [2024-07-22 10:55:34.446995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.015 [2024-07-22 10:55:34.447005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.015 qpair failed and we were unable to recover it. 00:39:29.015 [2024-07-22 10:55:34.447360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.015 [2024-07-22 10:55:34.447371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.015 qpair failed and we were unable to recover it. 00:39:29.015 [2024-07-22 10:55:34.447725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.015 [2024-07-22 10:55:34.447736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.015 qpair failed and we were unable to recover it. 00:39:29.015 [2024-07-22 10:55:34.448044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.015 [2024-07-22 10:55:34.448055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.015 qpair failed and we were unable to recover it. 00:39:29.015 [2024-07-22 10:55:34.448401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.015 [2024-07-22 10:55:34.448415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.015 qpair failed and we were unable to recover it. 00:39:29.015 [2024-07-22 10:55:34.448722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.015 [2024-07-22 10:55:34.448733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.015 qpair failed and we were unable to recover it. 00:39:29.015 [2024-07-22 10:55:34.448922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.015 [2024-07-22 10:55:34.448933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.015 qpair failed and we were unable to recover it. 00:39:29.015 [2024-07-22 10:55:34.449162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.015 [2024-07-22 10:55:34.449173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.015 qpair failed and we were unable to recover it. 
00:39:29.015 [2024-07-22 10:55:34.449475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.015 [2024-07-22 10:55:34.449486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.015 qpair failed and we were unable to recover it. 00:39:29.015 [2024-07-22 10:55:34.449808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.015 [2024-07-22 10:55:34.449819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.015 qpair failed and we were unable to recover it. 00:39:29.015 [2024-07-22 10:55:34.450177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.015 [2024-07-22 10:55:34.450188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.015 qpair failed and we were unable to recover it. 00:39:29.015 [2024-07-22 10:55:34.450512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.015 [2024-07-22 10:55:34.450523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.015 qpair failed and we were unable to recover it. 00:39:29.015 [2024-07-22 10:55:34.450850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.015 [2024-07-22 10:55:34.450861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.015 qpair failed and we were unable to recover it. 00:39:29.015 [2024-07-22 10:55:34.451218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.015 [2024-07-22 10:55:34.451228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.015 qpair failed and we were unable to recover it. 00:39:29.015 [2024-07-22 10:55:34.451467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.015 [2024-07-22 10:55:34.451478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.015 qpair failed and we were unable to recover it. 00:39:29.015 [2024-07-22 10:55:34.451665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.015 [2024-07-22 10:55:34.451676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.015 qpair failed and we were unable to recover it. 00:39:29.015 [2024-07-22 10:55:34.451869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.015 [2024-07-22 10:55:34.451880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.015 qpair failed and we were unable to recover it. 00:39:29.015 [2024-07-22 10:55:34.452094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.015 [2024-07-22 10:55:34.452104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.015 qpair failed and we were unable to recover it. 
00:39:29.015 [2024-07-22 10:55:34.452441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.015 [2024-07-22 10:55:34.452452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.015 qpair failed and we were unable to recover it. 00:39:29.015 [2024-07-22 10:55:34.452619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.015 [2024-07-22 10:55:34.452631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.015 qpair failed and we were unable to recover it. 00:39:29.015 [2024-07-22 10:55:34.452923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.015 [2024-07-22 10:55:34.452934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.015 qpair failed and we were unable to recover it. 00:39:29.015 [2024-07-22 10:55:34.453111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.015 [2024-07-22 10:55:34.453122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.015 qpair failed and we were unable to recover it. 00:39:29.015 [2024-07-22 10:55:34.453288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.016 [2024-07-22 10:55:34.453300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.016 qpair failed and we were unable to recover it. 00:39:29.016 [2024-07-22 10:55:34.453593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.016 [2024-07-22 10:55:34.453604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.016 qpair failed and we were unable to recover it. 00:39:29.016 [2024-07-22 10:55:34.453803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.016 [2024-07-22 10:55:34.453814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.016 qpair failed and we were unable to recover it. 00:39:29.016 [2024-07-22 10:55:34.454141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.016 [2024-07-22 10:55:34.454152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.016 qpair failed and we were unable to recover it. 00:39:29.016 [2024-07-22 10:55:34.454501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.016 [2024-07-22 10:55:34.454512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.016 qpair failed and we were unable to recover it. 00:39:29.016 [2024-07-22 10:55:34.454750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.016 [2024-07-22 10:55:34.454760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.016 qpair failed and we were unable to recover it. 
00:39:29.016 [2024-07-22 10:55:34.455084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.016 [2024-07-22 10:55:34.455095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.016 qpair failed and we were unable to recover it. 00:39:29.016 [2024-07-22 10:55:34.455417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.016 [2024-07-22 10:55:34.455429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.016 qpair failed and we were unable to recover it. 00:39:29.016 [2024-07-22 10:55:34.455772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.016 [2024-07-22 10:55:34.455784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.016 qpair failed and we were unable to recover it. 00:39:29.016 [2024-07-22 10:55:34.456111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.016 [2024-07-22 10:55:34.456124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.016 qpair failed and we were unable to recover it. 00:39:29.016 [2024-07-22 10:55:34.456322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.016 [2024-07-22 10:55:34.456333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.016 qpair failed and we were unable to recover it. 00:39:29.016 [2024-07-22 10:55:34.456660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.016 [2024-07-22 10:55:34.456671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.016 qpair failed and we were unable to recover it. 00:39:29.016 [2024-07-22 10:55:34.457024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.016 [2024-07-22 10:55:34.457035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.016 qpair failed and we were unable to recover it. 00:39:29.016 [2024-07-22 10:55:34.457361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.016 [2024-07-22 10:55:34.457373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.016 qpair failed and we were unable to recover it. 00:39:29.016 [2024-07-22 10:55:34.457753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.016 [2024-07-22 10:55:34.457765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.016 qpair failed and we were unable to recover it. 00:39:29.016 [2024-07-22 10:55:34.458085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.016 [2024-07-22 10:55:34.458096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.016 qpair failed and we were unable to recover it. 
00:39:29.016 [2024-07-22 10:55:34.458451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.016 [2024-07-22 10:55:34.458462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.016 qpair failed and we were unable to recover it. 00:39:29.016 [2024-07-22 10:55:34.458561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.016 [2024-07-22 10:55:34.458572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.016 qpair failed and we were unable to recover it. 00:39:29.016 [2024-07-22 10:55:34.458936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.016 [2024-07-22 10:55:34.458947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.016 qpair failed and we were unable to recover it. 00:39:29.016 [2024-07-22 10:55:34.459284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.016 [2024-07-22 10:55:34.459295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.016 qpair failed and we were unable to recover it. 00:39:29.016 [2024-07-22 10:55:34.459624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.016 [2024-07-22 10:55:34.459635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.016 qpair failed and we were unable to recover it. 00:39:29.016 [2024-07-22 10:55:34.459981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.016 [2024-07-22 10:55:34.459992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.016 qpair failed and we were unable to recover it. 00:39:29.016 [2024-07-22 10:55:34.460181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.016 [2024-07-22 10:55:34.460192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.016 qpair failed and we were unable to recover it. 00:39:29.016 [2024-07-22 10:55:34.460508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.016 [2024-07-22 10:55:34.460520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.016 qpair failed and we were unable to recover it. 00:39:29.016 [2024-07-22 10:55:34.460719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.016 [2024-07-22 10:55:34.460731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.016 qpair failed and we were unable to recover it. 00:39:29.016 [2024-07-22 10:55:34.460884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.016 [2024-07-22 10:55:34.460894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.016 qpair failed and we were unable to recover it. 
00:39:29.016 [2024-07-22 10:55:34.461115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.016 [2024-07-22 10:55:34.461125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.016 qpair failed and we were unable to recover it. 00:39:29.016 [2024-07-22 10:55:34.461452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.016 [2024-07-22 10:55:34.461463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.016 qpair failed and we were unable to recover it. 00:39:29.016 [2024-07-22 10:55:34.461662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.016 [2024-07-22 10:55:34.461673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.016 qpair failed and we were unable to recover it. 00:39:29.016 [2024-07-22 10:55:34.461821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.016 [2024-07-22 10:55:34.461831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.016 qpair failed and we were unable to recover it. 00:39:29.016 [2024-07-22 10:55:34.461880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.016 [2024-07-22 10:55:34.461891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.016 qpair failed and we were unable to recover it. 00:39:29.016 [2024-07-22 10:55:34.462184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.016 [2024-07-22 10:55:34.462196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.016 qpair failed and we were unable to recover it. 00:39:29.016 [2024-07-22 10:55:34.462389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.016 [2024-07-22 10:55:34.462404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.016 qpair failed and we were unable to recover it. 00:39:29.016 [2024-07-22 10:55:34.462754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.016 [2024-07-22 10:55:34.462765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.016 qpair failed and we were unable to recover it. 00:39:29.016 [2024-07-22 10:55:34.463094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.016 [2024-07-22 10:55:34.463105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.016 qpair failed and we were unable to recover it. 00:39:29.016 [2024-07-22 10:55:34.463463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.016 [2024-07-22 10:55:34.463475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.016 qpair failed and we were unable to recover it. 
00:39:29.016 [2024-07-22 10:55:34.463841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.016 [2024-07-22 10:55:34.463853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.016 qpair failed and we were unable to recover it. 00:39:29.016 [2024-07-22 10:55:34.464178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.016 [2024-07-22 10:55:34.464189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.016 qpair failed and we were unable to recover it. 00:39:29.016 [2024-07-22 10:55:34.464559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.016 [2024-07-22 10:55:34.464570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.016 qpair failed and we were unable to recover it. 00:39:29.016 [2024-07-22 10:55:34.464898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.016 [2024-07-22 10:55:34.464910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.016 qpair failed and we were unable to recover it. 00:39:29.016 [2024-07-22 10:55:34.465194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.016 [2024-07-22 10:55:34.465205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.016 qpair failed and we were unable to recover it. 00:39:29.016 [2024-07-22 10:55:34.465372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.017 [2024-07-22 10:55:34.465384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.017 qpair failed and we were unable to recover it. 00:39:29.017 [2024-07-22 10:55:34.465727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.017 [2024-07-22 10:55:34.465739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.017 qpair failed and we were unable to recover it. 00:39:29.017 [2024-07-22 10:55:34.466060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.017 [2024-07-22 10:55:34.466070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.017 qpair failed and we were unable to recover it. 00:39:29.017 [2024-07-22 10:55:34.466417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.017 [2024-07-22 10:55:34.466428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.017 qpair failed and we were unable to recover it. 00:39:29.017 [2024-07-22 10:55:34.466742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.017 [2024-07-22 10:55:34.466753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.017 qpair failed and we were unable to recover it. 
00:39:29.017 [2024-07-22 10:55:34.467074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.017 [2024-07-22 10:55:34.467086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.017 qpair failed and we were unable to recover it. 00:39:29.017 [2024-07-22 10:55:34.467414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.017 [2024-07-22 10:55:34.467425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.017 qpair failed and we were unable to recover it. 00:39:29.017 [2024-07-22 10:55:34.467845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.017 [2024-07-22 10:55:34.467857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.017 qpair failed and we were unable to recover it. 00:39:29.017 [2024-07-22 10:55:34.468050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.017 [2024-07-22 10:55:34.468061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.017 qpair failed and we were unable to recover it. 00:39:29.017 [2024-07-22 10:55:34.468353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.017 [2024-07-22 10:55:34.468364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.017 qpair failed and we were unable to recover it. 00:39:29.017 [2024-07-22 10:55:34.468691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.017 [2024-07-22 10:55:34.468702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.017 qpair failed and we were unable to recover it. 00:39:29.017 [2024-07-22 10:55:34.469050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.017 [2024-07-22 10:55:34.469060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.017 qpair failed and we were unable to recover it. 00:39:29.017 [2024-07-22 10:55:34.469295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.017 [2024-07-22 10:55:34.469306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.017 qpair failed and we were unable to recover it. 00:39:29.017 [2024-07-22 10:55:34.469574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.017 [2024-07-22 10:55:34.469585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.017 qpair failed and we were unable to recover it. 00:39:29.017 [2024-07-22 10:55:34.469913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.017 [2024-07-22 10:55:34.469925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.017 qpair failed and we were unable to recover it. 
00:39:29.017 [2024-07-22 10:55:34.470280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.017 [2024-07-22 10:55:34.470292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.017 qpair failed and we were unable to recover it. 00:39:29.017 [2024-07-22 10:55:34.470581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.017 [2024-07-22 10:55:34.470592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.017 qpair failed and we were unable to recover it. 00:39:29.017 [2024-07-22 10:55:34.470783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.017 [2024-07-22 10:55:34.470793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.017 qpair failed and we were unable to recover it. 00:39:29.017 [2024-07-22 10:55:34.470971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.017 [2024-07-22 10:55:34.470981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.017 qpair failed and we were unable to recover it. 00:39:29.017 [2024-07-22 10:55:34.471164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.017 [2024-07-22 10:55:34.471175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.017 qpair failed and we were unable to recover it. 00:39:29.017 [2024-07-22 10:55:34.471501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.017 [2024-07-22 10:55:34.471512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.017 qpair failed and we were unable to recover it. 00:39:29.017 [2024-07-22 10:55:34.471862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.017 [2024-07-22 10:55:34.471873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.017 qpair failed and we were unable to recover it. 00:39:29.017 [2024-07-22 10:55:34.472061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.017 [2024-07-22 10:55:34.472073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.017 qpair failed and we were unable to recover it. 00:39:29.017 [2024-07-22 10:55:34.472283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.017 [2024-07-22 10:55:34.472295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.017 qpair failed and we were unable to recover it. 00:39:29.017 [2024-07-22 10:55:34.472620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.017 [2024-07-22 10:55:34.472631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.017 qpair failed and we were unable to recover it. 
00:39:29.017 [2024-07-22 10:55:34.472933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.017 [2024-07-22 10:55:34.472944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.017 qpair failed and we were unable to recover it. 00:39:29.017 [2024-07-22 10:55:34.473151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.017 [2024-07-22 10:55:34.473162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.017 qpair failed and we were unable to recover it. 00:39:29.017 [2024-07-22 10:55:34.473514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.017 [2024-07-22 10:55:34.473525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.017 qpair failed and we were unable to recover it. 00:39:29.017 [2024-07-22 10:55:34.473688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.017 [2024-07-22 10:55:34.473700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.017 qpair failed and we were unable to recover it. 00:39:29.017 [2024-07-22 10:55:34.473922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.017 [2024-07-22 10:55:34.473933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.017 qpair failed and we were unable to recover it. 00:39:29.017 [2024-07-22 10:55:34.474275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.017 [2024-07-22 10:55:34.474286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.017 qpair failed and we were unable to recover it. 00:39:29.017 [2024-07-22 10:55:34.474484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.017 [2024-07-22 10:55:34.474495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.017 qpair failed and we were unable to recover it. 00:39:29.017 [2024-07-22 10:55:34.474666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.017 [2024-07-22 10:55:34.474677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.017 qpair failed and we were unable to recover it. 00:39:29.017 [2024-07-22 10:55:34.475020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.017 [2024-07-22 10:55:34.475031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.017 qpair failed and we were unable to recover it. 00:39:29.017 [2024-07-22 10:55:34.475353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.017 [2024-07-22 10:55:34.475365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.017 qpair failed and we were unable to recover it. 
00:39:29.018 [2024-07-22 10:55:34.475711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.018 [2024-07-22 10:55:34.475723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.018 qpair failed and we were unable to recover it. 00:39:29.018 [2024-07-22 10:55:34.475909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.018 [2024-07-22 10:55:34.475921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.018 qpair failed and we were unable to recover it. 00:39:29.018 [2024-07-22 10:55:34.476092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.018 [2024-07-22 10:55:34.476102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.018 qpair failed and we were unable to recover it. 00:39:29.018 [2024-07-22 10:55:34.476365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.018 [2024-07-22 10:55:34.476376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.018 qpair failed and we were unable to recover it. 00:39:29.018 [2024-07-22 10:55:34.476686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.018 [2024-07-22 10:55:34.476696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.018 qpair failed and we were unable to recover it. 00:39:29.018 [2024-07-22 10:55:34.476881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.018 [2024-07-22 10:55:34.476892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.018 qpair failed and we were unable to recover it. 00:39:29.018 [2024-07-22 10:55:34.477203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.018 [2024-07-22 10:55:34.477214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.018 qpair failed and we were unable to recover it. 00:39:29.018 [2024-07-22 10:55:34.477472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.018 [2024-07-22 10:55:34.477483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.018 qpair failed and we were unable to recover it. 00:39:29.018 [2024-07-22 10:55:34.477676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.018 [2024-07-22 10:55:34.477687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.018 qpair failed and we were unable to recover it. 00:39:29.018 [2024-07-22 10:55:34.477861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.018 [2024-07-22 10:55:34.477873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.018 qpair failed and we were unable to recover it. 
00:39:29.018 [2024-07-22 10:55:34.478199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.018 [2024-07-22 10:55:34.478210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.018 qpair failed and we were unable to recover it. 00:39:29.018 [2024-07-22 10:55:34.478401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.018 [2024-07-22 10:55:34.478413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.018 qpair failed and we were unable to recover it. 00:39:29.018 [2024-07-22 10:55:34.478710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.018 [2024-07-22 10:55:34.478720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.018 qpair failed and we were unable to recover it. 00:39:29.018 [2024-07-22 10:55:34.479076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.018 [2024-07-22 10:55:34.479087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.018 qpair failed and we were unable to recover it. 00:39:29.018 [2024-07-22 10:55:34.479432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.018 [2024-07-22 10:55:34.479443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.018 qpair failed and we were unable to recover it. 00:39:29.018 [2024-07-22 10:55:34.479773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.018 [2024-07-22 10:55:34.479784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.018 qpair failed and we were unable to recover it. 00:39:29.018 [2024-07-22 10:55:34.479978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.018 [2024-07-22 10:55:34.479988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.018 qpair failed and we were unable to recover it. 00:39:29.018 [2024-07-22 10:55:34.480214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.018 [2024-07-22 10:55:34.480225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.018 qpair failed and we were unable to recover it. 00:39:29.018 [2024-07-22 10:55:34.480410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.018 [2024-07-22 10:55:34.480421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.018 qpair failed and we were unable to recover it. 00:39:29.018 [2024-07-22 10:55:34.480756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.018 [2024-07-22 10:55:34.480766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.018 qpair failed and we were unable to recover it. 
00:39:29.018 [2024-07-22 10:55:34.481117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.018 [2024-07-22 10:55:34.481127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.018 qpair failed and we were unable to recover it. 00:39:29.018 [2024-07-22 10:55:34.481479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.018 [2024-07-22 10:55:34.481491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.018 qpair failed and we were unable to recover it. 00:39:29.018 [2024-07-22 10:55:34.481819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.018 [2024-07-22 10:55:34.481830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.018 qpair failed and we were unable to recover it. 00:39:29.018 [2024-07-22 10:55:34.482159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.018 [2024-07-22 10:55:34.482170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.018 qpair failed and we were unable to recover it. 00:39:29.018 [2024-07-22 10:55:34.482403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.018 [2024-07-22 10:55:34.482414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.018 qpair failed and we were unable to recover it. 00:39:29.018 [2024-07-22 10:55:34.482772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.018 [2024-07-22 10:55:34.482783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.018 qpair failed and we were unable to recover it. 00:39:29.018 [2024-07-22 10:55:34.483110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.018 [2024-07-22 10:55:34.483122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.018 qpair failed and we were unable to recover it. 00:39:29.018 [2024-07-22 10:55:34.483445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.018 [2024-07-22 10:55:34.483456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.018 qpair failed and we were unable to recover it. 00:39:29.018 [2024-07-22 10:55:34.483801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.018 [2024-07-22 10:55:34.483814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.018 qpair failed and we were unable to recover it. 00:39:29.018 [2024-07-22 10:55:34.484161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.018 [2024-07-22 10:55:34.484172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.018 qpair failed and we were unable to recover it. 
00:39:29.018 [2024-07-22 10:55:34.484360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.018 [2024-07-22 10:55:34.484371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.018 qpair failed and we were unable to recover it. 00:39:29.018 [2024-07-22 10:55:34.484701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.018 [2024-07-22 10:55:34.484712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.018 qpair failed and we were unable to recover it. 00:39:29.018 [2024-07-22 10:55:34.485066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.018 [2024-07-22 10:55:34.485076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.018 qpair failed and we were unable to recover it. 00:39:29.018 [2024-07-22 10:55:34.485433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.018 [2024-07-22 10:55:34.485443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.018 qpair failed and we were unable to recover it. 00:39:29.018 [2024-07-22 10:55:34.485727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.018 [2024-07-22 10:55:34.485739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.018 qpair failed and we were unable to recover it. 00:39:29.018 [2024-07-22 10:55:34.486101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.018 [2024-07-22 10:55:34.486112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.018 qpair failed and we were unable to recover it. 00:39:29.018 [2024-07-22 10:55:34.486464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.018 [2024-07-22 10:55:34.486475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.018 qpair failed and we were unable to recover it. 00:39:29.018 [2024-07-22 10:55:34.486825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.018 [2024-07-22 10:55:34.486836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.018 qpair failed and we were unable to recover it. 00:39:29.018 [2024-07-22 10:55:34.487158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.018 [2024-07-22 10:55:34.487168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.018 qpair failed and we were unable to recover it. 00:39:29.018 [2024-07-22 10:55:34.487363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.018 [2024-07-22 10:55:34.487374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420 00:39:29.018 qpair failed and we were unable to recover it. 
00:39:29.018 [2024-07-22 10:55:34.487506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:29.018 [2024-07-22 10:55:34.487518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:29.018 qpair failed and we were unable to recover it.
00:39:29.019 [2024-07-22 10:55:34.487888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:29.019 [2024-07-22 10:55:34.487898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:29.019 qpair failed and we were unable to recover it.
00:39:29.019 [2024-07-22 10:55:34.488227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:29.019 [2024-07-22 10:55:34.488238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:29.019 qpair failed and we were unable to recover it.
00:39:29.019 [2024-07-22 10:55:34.488437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:29.019 [2024-07-22 10:55:34.488450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:29.019 qpair failed and we were unable to recover it.
00:39:29.019 [2024-07-22 10:55:34.488558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:29.019 [2024-07-22 10:55:34.488567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c098a0 with addr=10.0.0.2, port=4420
00:39:29.019 qpair failed and we were unable to recover it.
00:39:29.019 [2024-07-22 10:55:34.488654] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c17820 is same with the state(5) to be set
00:39:29.019 Read completed with error (sct=0, sc=8)
00:39:29.019 starting I/O failed
00:39:29.019 Read completed with error (sct=0, sc=8)
00:39:29.019 starting I/O failed
00:39:29.019 Read completed with error (sct=0, sc=8)
00:39:29.019 starting I/O failed
00:39:29.019 Read completed with error (sct=0, sc=8)
00:39:29.019 starting I/O failed
00:39:29.019 Read completed with error (sct=0, sc=8)
00:39:29.019 starting I/O failed
00:39:29.019 Read completed with error (sct=0, sc=8)
00:39:29.019 starting I/O failed
00:39:29.019 Read completed with error (sct=0, sc=8)
00:39:29.019 starting I/O failed
00:39:29.019 Read completed with error (sct=0, sc=8)
00:39:29.019 starting I/O failed
00:39:29.019 Read completed with error (sct=0, sc=8)
00:39:29.019 starting I/O failed
00:39:29.019 Read completed with error (sct=0, sc=8)
00:39:29.019 starting I/O failed
00:39:29.019 Read completed with error (sct=0, sc=8)
00:39:29.019 starting I/O failed
00:39:29.019 Read completed with error (sct=0, sc=8)
00:39:29.019 starting I/O failed
00:39:29.019 Write completed with error (sct=0, sc=8)
00:39:29.019 starting I/O failed
00:39:29.019 Read completed with error (sct=0, sc=8)
00:39:29.019 starting I/O failed
00:39:29.019 Read completed with error (sct=0, sc=8)
00:39:29.019 starting I/O failed
00:39:29.019 Read completed with error (sct=0, sc=8)
00:39:29.019 starting I/O failed
00:39:29.019 Read completed with error (sct=0, sc=8)
00:39:29.019 starting I/O failed
00:39:29.019 Read completed with error (sct=0, sc=8)
00:39:29.019 starting I/O failed
00:39:29.019 Write completed with error (sct=0, sc=8)
00:39:29.019 starting I/O failed
00:39:29.019 Write completed with error (sct=0, sc=8)
00:39:29.019 starting I/O failed
00:39:29.019 Read completed with error (sct=0, sc=8)
00:39:29.019 starting I/O failed
00:39:29.019 Read completed with error (sct=0, sc=8)
00:39:29.019 starting I/O failed
00:39:29.019 Write completed with error (sct=0, sc=8)
00:39:29.019 starting I/O failed
00:39:29.019 Read completed with error (sct=0, sc=8)
00:39:29.019 starting I/O failed
00:39:29.019 Read completed with error (sct=0, sc=8)
00:39:29.019 starting I/O failed
00:39:29.019 Read completed with error (sct=0, sc=8)
00:39:29.019 starting I/O failed
00:39:29.019 Write completed with error (sct=0, sc=8)
00:39:29.019 starting I/O failed
00:39:29.019 Read completed with error (sct=0, sc=8)
00:39:29.019 starting I/O failed
00:39:29.019 Read completed with error (sct=0, sc=8)
00:39:29.019 starting I/O failed
00:39:29.019 Write completed with error (sct=0, sc=8)
00:39:29.019 starting I/O failed
00:39:29.019 Read completed with error (sct=0, sc=8)
00:39:29.019 starting I/O failed
00:39:29.019 Write completed with error (sct=0, sc=8)
00:39:29.019 starting I/O failed
00:39:29.019 [2024-07-22 10:55:34.488871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:29.019 [2024-07-22 10:55:34.489189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:29.019 [2024-07-22 10:55:34.489201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420
00:39:29.019 qpair failed and we were unable to recover it.
00:39:29.019 [2024-07-22 10:55:34.489623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:29.019 [2024-07-22 10:55:34.489652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420
00:39:29.019 qpair failed and we were unable to recover it.
00:39:29.019 [2024-07-22 10:55:34.489993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:29.019 [2024-07-22 10:55:34.490003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420
00:39:29.019 qpair failed and we were unable to recover it.
00:39:29.019 [2024-07-22 10:55:34.490158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:29.019 [2024-07-22 10:55:34.490169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420
00:39:29.019 qpair failed and we were unable to recover it.
00:39:29.019 [2024-07-22 10:55:34.490499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:29.019 [2024-07-22 10:55:34.490509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420
00:39:29.019 qpair failed and we were unable to recover it.
00:39:29.019 [2024-07-22 10:55:34.490876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:29.019 [2024-07-22 10:55:34.490885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420
00:39:29.019 qpair failed and we were unable to recover it.
00:39:29.019 [2024-07-22 10:55:34.491081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.019 [2024-07-22 10:55:34.491089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.019 qpair failed and we were unable to recover it. 00:39:29.019 [2024-07-22 10:55:34.491263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.019 [2024-07-22 10:55:34.491272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.019 qpair failed and we were unable to recover it. 00:39:29.019 [2024-07-22 10:55:34.491587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.019 [2024-07-22 10:55:34.491596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.019 qpair failed and we were unable to recover it. 00:39:29.019 [2024-07-22 10:55:34.491942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.019 [2024-07-22 10:55:34.491950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.019 qpair failed and we were unable to recover it. 00:39:29.019 [2024-07-22 10:55:34.492138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.019 [2024-07-22 10:55:34.492147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.019 qpair failed and we were unable to recover it. 00:39:29.019 [2024-07-22 10:55:34.492462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.019 [2024-07-22 10:55:34.492470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.019 qpair failed and we were unable to recover it. 00:39:29.019 [2024-07-22 10:55:34.492809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.019 [2024-07-22 10:55:34.492817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.019 qpair failed and we were unable to recover it. 00:39:29.019 [2024-07-22 10:55:34.493148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.019 [2024-07-22 10:55:34.493157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.019 qpair failed and we were unable to recover it. 00:39:29.019 [2024-07-22 10:55:34.493348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.019 [2024-07-22 10:55:34.493356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.019 qpair failed and we were unable to recover it. 00:39:29.019 [2024-07-22 10:55:34.493715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.019 [2024-07-22 10:55:34.493724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.019 qpair failed and we were unable to recover it. 
00:39:29.019 [2024-07-22 10:55:34.494053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.019 [2024-07-22 10:55:34.494064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.019 qpair failed and we were unable to recover it. 00:39:29.019 [2024-07-22 10:55:34.494412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.019 [2024-07-22 10:55:34.494420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.019 qpair failed and we were unable to recover it. 00:39:29.019 [2024-07-22 10:55:34.494781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.019 [2024-07-22 10:55:34.494789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.019 qpair failed and we were unable to recover it. 00:39:29.019 [2024-07-22 10:55:34.494974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.019 [2024-07-22 10:55:34.494983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.019 qpair failed and we were unable to recover it. 00:39:29.019 [2024-07-22 10:55:34.495276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.019 [2024-07-22 10:55:34.495284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.019 qpair failed and we were unable to recover it. 00:39:29.019 [2024-07-22 10:55:34.495588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.019 [2024-07-22 10:55:34.495596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.019 qpair failed and we were unable to recover it. 00:39:29.019 [2024-07-22 10:55:34.495922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.019 [2024-07-22 10:55:34.495931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.019 qpair failed and we were unable to recover it. 00:39:29.019 [2024-07-22 10:55:34.496291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.019 [2024-07-22 10:55:34.496299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.019 qpair failed and we were unable to recover it. 00:39:29.019 [2024-07-22 10:55:34.496643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.019 [2024-07-22 10:55:34.496651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.019 qpair failed and we were unable to recover it. 00:39:29.019 [2024-07-22 10:55:34.496982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.020 [2024-07-22 10:55:34.496990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.020 qpair failed and we were unable to recover it. 
00:39:29.020 [2024-07-22 10:55:34.497347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.020 [2024-07-22 10:55:34.497355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.020 qpair failed and we were unable to recover it. 00:39:29.020 [2024-07-22 10:55:34.497596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.020 [2024-07-22 10:55:34.497606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.020 qpair failed and we were unable to recover it. 00:39:29.020 [2024-07-22 10:55:34.497913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.020 [2024-07-22 10:55:34.497922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.020 qpair failed and we were unable to recover it. 00:39:29.020 [2024-07-22 10:55:34.498237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.020 [2024-07-22 10:55:34.498245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.020 qpair failed and we were unable to recover it. 00:39:29.020 [2024-07-22 10:55:34.498583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.020 [2024-07-22 10:55:34.498591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.020 qpair failed and we were unable to recover it. 00:39:29.020 [2024-07-22 10:55:34.498924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.020 [2024-07-22 10:55:34.498932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.020 qpair failed and we were unable to recover it. 00:39:29.020 [2024-07-22 10:55:34.499294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.020 [2024-07-22 10:55:34.499302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.020 qpair failed and we were unable to recover it. 00:39:29.020 [2024-07-22 10:55:34.499496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.020 [2024-07-22 10:55:34.499505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.020 qpair failed and we were unable to recover it. 00:39:29.020 [2024-07-22 10:55:34.499678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.020 [2024-07-22 10:55:34.499686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.020 qpair failed and we were unable to recover it. 00:39:29.020 [2024-07-22 10:55:34.499974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.020 [2024-07-22 10:55:34.499982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.020 qpair failed and we were unable to recover it. 
00:39:29.020 [2024-07-22 10:55:34.500300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.020 [2024-07-22 10:55:34.500307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.020 qpair failed and we were unable to recover it. 00:39:29.020 [2024-07-22 10:55:34.500510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.020 [2024-07-22 10:55:34.500519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.020 qpair failed and we were unable to recover it. 00:39:29.020 [2024-07-22 10:55:34.500569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.020 [2024-07-22 10:55:34.500577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.020 qpair failed and we were unable to recover it. 00:39:29.020 [2024-07-22 10:55:34.500628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.020 [2024-07-22 10:55:34.500636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.020 qpair failed and we were unable to recover it. 00:39:29.020 [2024-07-22 10:55:34.500918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.020 [2024-07-22 10:55:34.500926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.020 qpair failed and we were unable to recover it. 00:39:29.020 [2024-07-22 10:55:34.501237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.020 [2024-07-22 10:55:34.501246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.020 qpair failed and we were unable to recover it. 00:39:29.020 [2024-07-22 10:55:34.501587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.020 [2024-07-22 10:55:34.501595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.020 qpair failed and we were unable to recover it. 00:39:29.020 [2024-07-22 10:55:34.501943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.020 [2024-07-22 10:55:34.501951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.020 qpair failed and we were unable to recover it. 00:39:29.020 [2024-07-22 10:55:34.502274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.020 [2024-07-22 10:55:34.502282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.020 qpair failed and we were unable to recover it. 00:39:29.020 [2024-07-22 10:55:34.502621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.020 [2024-07-22 10:55:34.502629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.020 qpair failed and we were unable to recover it. 
00:39:29.020 [2024-07-22 10:55:34.502819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.020 [2024-07-22 10:55:34.502827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.020 qpair failed and we were unable to recover it. 00:39:29.020 [2024-07-22 10:55:34.503103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.020 [2024-07-22 10:55:34.503111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.020 qpair failed and we were unable to recover it. 00:39:29.020 [2024-07-22 10:55:34.503460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.020 [2024-07-22 10:55:34.503468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.020 qpair failed and we were unable to recover it. 00:39:29.020 [2024-07-22 10:55:34.503786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.020 [2024-07-22 10:55:34.503794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.020 qpair failed and we were unable to recover it. 00:39:29.020 [2024-07-22 10:55:34.504116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.020 [2024-07-22 10:55:34.504123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.020 qpair failed and we were unable to recover it. 00:39:29.020 [2024-07-22 10:55:34.504319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.020 [2024-07-22 10:55:34.504326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.020 qpair failed and we were unable to recover it. 00:39:29.020 [2024-07-22 10:55:34.504627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.020 [2024-07-22 10:55:34.504636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.020 qpair failed and we were unable to recover it. 00:39:29.020 [2024-07-22 10:55:34.504911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.020 [2024-07-22 10:55:34.504919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.020 qpair failed and we were unable to recover it. 00:39:29.020 [2024-07-22 10:55:34.505239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.020 [2024-07-22 10:55:34.505247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.020 qpair failed and we were unable to recover it. 00:39:29.020 [2024-07-22 10:55:34.505641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.020 [2024-07-22 10:55:34.505648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.020 qpair failed and we were unable to recover it. 
00:39:29.020 [2024-07-22 10:55:34.505838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.020 [2024-07-22 10:55:34.505848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.020 qpair failed and we were unable to recover it. 00:39:29.020 [2024-07-22 10:55:34.506152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.020 [2024-07-22 10:55:34.506160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.020 qpair failed and we were unable to recover it. 00:39:29.020 [2024-07-22 10:55:34.506504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.020 [2024-07-22 10:55:34.506512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.020 qpair failed and we were unable to recover it. 00:39:29.020 [2024-07-22 10:55:34.506718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.020 [2024-07-22 10:55:34.506726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.020 qpair failed and we were unable to recover it. 00:39:29.020 [2024-07-22 10:55:34.507046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.020 [2024-07-22 10:55:34.507054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.020 qpair failed and we were unable to recover it. 00:39:29.020 [2024-07-22 10:55:34.507366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.020 [2024-07-22 10:55:34.507373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.020 qpair failed and we were unable to recover it. 00:39:29.020 [2024-07-22 10:55:34.507689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.020 [2024-07-22 10:55:34.507697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.020 qpair failed and we were unable to recover it. 00:39:29.020 [2024-07-22 10:55:34.507887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.020 [2024-07-22 10:55:34.507895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.020 qpair failed and we were unable to recover it. 00:39:29.020 [2024-07-22 10:55:34.508110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.020 [2024-07-22 10:55:34.508118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.020 qpair failed and we were unable to recover it. 00:39:29.020 [2024-07-22 10:55:34.508290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.020 [2024-07-22 10:55:34.508298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.020 qpair failed and we were unable to recover it. 
00:39:29.021 [2024-07-22 10:55:34.508495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.021 [2024-07-22 10:55:34.508503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.021 qpair failed and we were unable to recover it. 00:39:29.021 [2024-07-22 10:55:34.508670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.021 [2024-07-22 10:55:34.508678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.021 qpair failed and we were unable to recover it. 00:39:29.021 [2024-07-22 10:55:34.508861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.021 [2024-07-22 10:55:34.508869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.021 qpair failed and we were unable to recover it. 00:39:29.021 [2024-07-22 10:55:34.509168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.021 [2024-07-22 10:55:34.509175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.021 qpair failed and we were unable to recover it. 00:39:29.021 [2024-07-22 10:55:34.509504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.021 [2024-07-22 10:55:34.509512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.021 qpair failed and we were unable to recover it. 00:39:29.021 [2024-07-22 10:55:34.509832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.021 [2024-07-22 10:55:34.509840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.021 qpair failed and we were unable to recover it. 00:39:29.021 [2024-07-22 10:55:34.510061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.021 [2024-07-22 10:55:34.510068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.021 qpair failed and we were unable to recover it. 00:39:29.021 [2024-07-22 10:55:34.510400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.021 [2024-07-22 10:55:34.510408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.021 qpair failed and we were unable to recover it. 00:39:29.021 [2024-07-22 10:55:34.510758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.021 [2024-07-22 10:55:34.510766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.021 qpair failed and we were unable to recover it. 00:39:29.021 [2024-07-22 10:55:34.511102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.021 [2024-07-22 10:55:34.511110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.021 qpair failed and we were unable to recover it. 
00:39:29.021 [2024-07-22 10:55:34.511447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.021 [2024-07-22 10:55:34.511455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.021 qpair failed and we were unable to recover it. 00:39:29.021 [2024-07-22 10:55:34.511785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.021 [2024-07-22 10:55:34.511793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.021 qpair failed and we were unable to recover it. 00:39:29.021 [2024-07-22 10:55:34.512116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.021 [2024-07-22 10:55:34.512123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.021 qpair failed and we were unable to recover it. 00:39:29.021 [2024-07-22 10:55:34.512439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.021 [2024-07-22 10:55:34.512447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.021 qpair failed and we were unable to recover it. 00:39:29.021 [2024-07-22 10:55:34.512495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.021 [2024-07-22 10:55:34.512502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.021 qpair failed and we were unable to recover it. 00:39:29.021 [2024-07-22 10:55:34.512789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.021 [2024-07-22 10:55:34.512796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.021 qpair failed and we were unable to recover it. 00:39:29.021 [2024-07-22 10:55:34.512981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.021 [2024-07-22 10:55:34.512989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.021 qpair failed and we were unable to recover it. 00:39:29.021 [2024-07-22 10:55:34.513326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.021 [2024-07-22 10:55:34.513334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.021 qpair failed and we were unable to recover it. 00:39:29.021 [2024-07-22 10:55:34.513599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.021 [2024-07-22 10:55:34.513607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.021 qpair failed and we were unable to recover it. 00:39:29.021 [2024-07-22 10:55:34.514013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.021 [2024-07-22 10:55:34.514021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.021 qpair failed and we were unable to recover it. 
00:39:29.021 [2024-07-22 10:55:34.514331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.021 [2024-07-22 10:55:34.514339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.021 qpair failed and we were unable to recover it. 00:39:29.021 [2024-07-22 10:55:34.514562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.021 [2024-07-22 10:55:34.514569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.021 qpair failed and we were unable to recover it. 00:39:29.021 [2024-07-22 10:55:34.514906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.021 [2024-07-22 10:55:34.514914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.021 qpair failed and we were unable to recover it. 00:39:29.021 [2024-07-22 10:55:34.515229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.021 [2024-07-22 10:55:34.515237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.021 qpair failed and we were unable to recover it. 00:39:29.021 [2024-07-22 10:55:34.515586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.021 [2024-07-22 10:55:34.515593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.021 qpair failed and we were unable to recover it. 00:39:29.021 [2024-07-22 10:55:34.515909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.021 [2024-07-22 10:55:34.515917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.021 qpair failed and we were unable to recover it. 00:39:29.021 [2024-07-22 10:55:34.516252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.021 [2024-07-22 10:55:34.516260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.021 qpair failed and we were unable to recover it. 00:39:29.021 [2024-07-22 10:55:34.516622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.021 [2024-07-22 10:55:34.516629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.021 qpair failed and we were unable to recover it. 00:39:29.021 [2024-07-22 10:55:34.516987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.021 [2024-07-22 10:55:34.516995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.021 qpair failed and we were unable to recover it. 00:39:29.021 [2024-07-22 10:55:34.517186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.021 [2024-07-22 10:55:34.517194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.021 qpair failed and we were unable to recover it. 
00:39:29.021 [2024-07-22 10:55:34.517489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.021 [2024-07-22 10:55:34.517499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.021 qpair failed and we were unable to recover it. 00:39:29.021 [2024-07-22 10:55:34.517840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.021 [2024-07-22 10:55:34.517847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.021 qpair failed and we were unable to recover it. 00:39:29.021 [2024-07-22 10:55:34.518250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.021 [2024-07-22 10:55:34.518258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.021 qpair failed and we were unable to recover it. 00:39:29.021 [2024-07-22 10:55:34.518436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.021 [2024-07-22 10:55:34.518445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.021 qpair failed and we were unable to recover it. 00:39:29.021 [2024-07-22 10:55:34.518784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.021 [2024-07-22 10:55:34.518792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.021 qpair failed and we were unable to recover it. 00:39:29.022 [2024-07-22 10:55:34.519137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.022 [2024-07-22 10:55:34.519145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.022 qpair failed and we were unable to recover it. 00:39:29.022 [2024-07-22 10:55:34.519371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.022 [2024-07-22 10:55:34.519378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.022 qpair failed and we were unable to recover it. 00:39:29.022 [2024-07-22 10:55:34.519580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.022 [2024-07-22 10:55:34.519588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.022 qpair failed and we were unable to recover it. 00:39:29.022 [2024-07-22 10:55:34.519931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.022 [2024-07-22 10:55:34.519939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.022 qpair failed and we were unable to recover it. 00:39:29.022 [2024-07-22 10:55:34.520123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.022 [2024-07-22 10:55:34.520130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.022 qpair failed and we were unable to recover it. 
00:39:29.022 [2024-07-22 10:55:34.520481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.022 [2024-07-22 10:55:34.520489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.022 qpair failed and we were unable to recover it. 00:39:29.022 [2024-07-22 10:55:34.520880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.022 [2024-07-22 10:55:34.520888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.022 qpair failed and we were unable to recover it. 00:39:29.022 [2024-07-22 10:55:34.521206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.022 [2024-07-22 10:55:34.521214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.022 qpair failed and we were unable to recover it. 00:39:29.022 [2024-07-22 10:55:34.521405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.022 [2024-07-22 10:55:34.521413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.022 qpair failed and we were unable to recover it. 00:39:29.022 [2024-07-22 10:55:34.521721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.022 [2024-07-22 10:55:34.521728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.022 qpair failed and we were unable to recover it. 00:39:29.022 [2024-07-22 10:55:34.522060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.022 [2024-07-22 10:55:34.522068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.022 qpair failed and we were unable to recover it. 00:39:29.022 [2024-07-22 10:55:34.522415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.022 [2024-07-22 10:55:34.522423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.022 qpair failed and we were unable to recover it. 00:39:29.022 [2024-07-22 10:55:34.522610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.022 [2024-07-22 10:55:34.522618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.022 qpair failed and we were unable to recover it. 00:39:29.022 [2024-07-22 10:55:34.522930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.022 [2024-07-22 10:55:34.522938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.022 qpair failed and we were unable to recover it. 00:39:29.022 [2024-07-22 10:55:34.523254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.022 [2024-07-22 10:55:34.523262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.022 qpair failed and we were unable to recover it. 
00:39:29.022 [2024-07-22 10:55:34.523578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.022 [2024-07-22 10:55:34.523586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.022 qpair failed and we were unable to recover it. 00:39:29.022 [2024-07-22 10:55:34.523919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.022 [2024-07-22 10:55:34.523927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.022 qpair failed and we were unable to recover it. 00:39:29.022 [2024-07-22 10:55:34.524301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.022 [2024-07-22 10:55:34.524308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.022 qpair failed and we were unable to recover it. 00:39:29.022 [2024-07-22 10:55:34.524620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.022 [2024-07-22 10:55:34.524628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.022 qpair failed and we were unable to recover it. 00:39:29.022 [2024-07-22 10:55:34.524936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.022 [2024-07-22 10:55:34.524944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.022 qpair failed and we were unable to recover it. 00:39:29.022 [2024-07-22 10:55:34.525242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.022 [2024-07-22 10:55:34.525250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.022 qpair failed and we were unable to recover it. 00:39:29.022 [2024-07-22 10:55:34.525606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.022 [2024-07-22 10:55:34.525614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.022 qpair failed and we were unable to recover it. 00:39:29.022 [2024-07-22 10:55:34.525937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.022 [2024-07-22 10:55:34.525946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.022 qpair failed and we were unable to recover it. 00:39:29.022 [2024-07-22 10:55:34.526278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.022 [2024-07-22 10:55:34.526285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.022 qpair failed and we were unable to recover it. 00:39:29.022 [2024-07-22 10:55:34.526616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.022 [2024-07-22 10:55:34.526623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.022 qpair failed and we were unable to recover it. 
00:39:29.022 [2024-07-22 10:55:34.526821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.022 [2024-07-22 10:55:34.526828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.022 qpair failed and we were unable to recover it. 00:39:29.022 [2024-07-22 10:55:34.527175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.022 [2024-07-22 10:55:34.527182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.022 qpair failed and we were unable to recover it. 00:39:29.022 [2024-07-22 10:55:34.527384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.022 [2024-07-22 10:55:34.527392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.022 qpair failed and we were unable to recover it. 00:39:29.022 [2024-07-22 10:55:34.527616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.022 [2024-07-22 10:55:34.527625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.022 qpair failed and we were unable to recover it. 00:39:29.022 [2024-07-22 10:55:34.527993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.022 [2024-07-22 10:55:34.528002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.022 qpair failed and we were unable to recover it. 00:39:29.022 [2024-07-22 10:55:34.528353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.022 [2024-07-22 10:55:34.528361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.022 qpair failed and we were unable to recover it. 00:39:29.022 [2024-07-22 10:55:34.528544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.022 [2024-07-22 10:55:34.528554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.022 qpair failed and we were unable to recover it. 00:39:29.022 [2024-07-22 10:55:34.528873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.022 [2024-07-22 10:55:34.528881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.022 qpair failed and we were unable to recover it. 00:39:29.022 [2024-07-22 10:55:34.529198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.022 [2024-07-22 10:55:34.529205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.022 qpair failed and we were unable to recover it. 00:39:29.022 [2024-07-22 10:55:34.529439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.022 [2024-07-22 10:55:34.529447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.022 qpair failed and we were unable to recover it. 
00:39:29.022 [2024-07-22 10:55:34.529654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.022 [2024-07-22 10:55:34.529663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.022 qpair failed and we were unable to recover it. 00:39:29.022 [2024-07-22 10:55:34.529866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.022 [2024-07-22 10:55:34.529874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.022 qpair failed and we were unable to recover it. 00:39:29.022 [2024-07-22 10:55:34.530202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.022 [2024-07-22 10:55:34.530209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.022 qpair failed and we were unable to recover it. 00:39:29.022 [2024-07-22 10:55:34.530559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.022 [2024-07-22 10:55:34.530568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.022 qpair failed and we were unable to recover it. 00:39:29.022 10:55:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:29.022 [2024-07-22 10:55:34.530915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.022 [2024-07-22 10:55:34.530925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.022 qpair failed and we were unable to recover it. 00:39:29.022 10:55:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:39:29.023 [2024-07-22 10:55:34.531246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.023 [2024-07-22 10:55:34.531255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.023 qpair failed and we were unable to recover it. 00:39:29.023 10:55:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:39:29.023 10:55:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:29.023 [2024-07-22 10:55:34.531578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.023 [2024-07-22 10:55:34.531588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.023 qpair failed and we were unable to recover it. 00:39:29.023 10:55:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:29.023 [2024-07-22 10:55:34.531913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.023 [2024-07-22 10:55:34.531923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.023 qpair failed and we were unable to recover it. 
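The xtrace fragments interleaved above (common/autotest_common.sh@858 "(( i == 0 ))", @862 "return 0", then nvmf/common.sh@483 "timing_exit start_nvmf_tgt", followed by "xtrace_disable" and "set +x") show the nvmf_target_disconnect_tc2 harness closing out its start_nvmf_tgt phase while the host-side connect retries are still failing. The function below is only a hedged sketch of the countdown-style wait that such a trace is consistent with; the name wait_for_tgt_pid, the retry budget, and the kill -0 liveness check are assumptions for illustration, not the actual autotest_common.sh code.

# Hypothetical sketch only -- not the real common/autotest_common.sh helper.
# Wait for a target process to come up, counting down a retry budget and
# ending in the same "(( i == 0 ))" style check seen in the xtrace above.
wait_for_tgt_pid() {
    local pid=$1
    local i=${2:-30}                      # assumed retry budget in seconds
    while (( i > 0 )) && ! kill -0 "$pid" 2>/dev/null; do
        sleep 1
        (( i-- ))                         # one second of the budget spent
    done
    if (( i == 0 )); then
        return 1                          # budget exhausted: process never appeared
    fi
    return 0                              # process is alive
}

A caller would typically run something like: wait_for_tgt_pid "$tgt_pid" 30 || exit 1 before issuing connect attempts; here the trace suggests the wait returned 0 even though the TCP listener was not yet reachable from the initiator side.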
00:39:29.023 [2024-07-22 10:55:34.532101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.023 [2024-07-22 10:55:34.532108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.023 qpair failed and we were unable to recover it. 00:39:29.023 [2024-07-22 10:55:34.532333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.023 [2024-07-22 10:55:34.532341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.023 qpair failed and we were unable to recover it. 00:39:29.023 [2024-07-22 10:55:34.532526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.023 [2024-07-22 10:55:34.532534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.023 qpair failed and we were unable to recover it. 00:39:29.023 [2024-07-22 10:55:34.532779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.023 [2024-07-22 10:55:34.532787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.023 qpair failed and we were unable to recover it. 00:39:29.023 [2024-07-22 10:55:34.533076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.023 [2024-07-22 10:55:34.533084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.023 qpair failed and we were unable to recover it. 00:39:29.023 [2024-07-22 10:55:34.533427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.023 [2024-07-22 10:55:34.533435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.023 qpair failed and we were unable to recover it. 00:39:29.023 [2024-07-22 10:55:34.533783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.023 [2024-07-22 10:55:34.533791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.023 qpair failed and we were unable to recover it. 00:39:29.023 [2024-07-22 10:55:34.534111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.023 [2024-07-22 10:55:34.534119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.023 qpair failed and we were unable to recover it. 00:39:29.023 [2024-07-22 10:55:34.534473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.023 [2024-07-22 10:55:34.534481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.023 qpair failed and we were unable to recover it. 00:39:29.023 [2024-07-22 10:55:34.534837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.023 [2024-07-22 10:55:34.534845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.023 qpair failed and we were unable to recover it. 
00:39:29.023 [2024-07-22 10:55:34.535159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.023 [2024-07-22 10:55:34.535168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.023 qpair failed and we were unable to recover it. 00:39:29.023 [2024-07-22 10:55:34.535363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.023 [2024-07-22 10:55:34.535371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.023 qpair failed and we were unable to recover it. 00:39:29.023 [2024-07-22 10:55:34.535757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.023 [2024-07-22 10:55:34.535766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.023 qpair failed and we were unable to recover it. 00:39:29.023 [2024-07-22 10:55:34.536096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.023 [2024-07-22 10:55:34.536104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.023 qpair failed and we were unable to recover it. 00:39:29.023 [2024-07-22 10:55:34.536448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.023 [2024-07-22 10:55:34.536456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.023 qpair failed and we were unable to recover it. 00:39:29.023 [2024-07-22 10:55:34.536838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.023 [2024-07-22 10:55:34.536846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.023 qpair failed and we were unable to recover it. 00:39:29.023 [2024-07-22 10:55:34.537186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.023 [2024-07-22 10:55:34.537194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.023 qpair failed and we were unable to recover it. 00:39:29.023 [2024-07-22 10:55:34.537523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.023 [2024-07-22 10:55:34.537532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.023 qpair failed and we were unable to recover it. 00:39:29.023 [2024-07-22 10:55:34.537869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.023 [2024-07-22 10:55:34.537878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.023 qpair failed and we were unable to recover it. 00:39:29.023 [2024-07-22 10:55:34.538202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.023 [2024-07-22 10:55:34.538211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.023 qpair failed and we were unable to recover it. 
00:39:29.023 [2024-07-22 10:55:34.538537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.023 [2024-07-22 10:55:34.538546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.023 qpair failed and we were unable to recover it. 00:39:29.023 [2024-07-22 10:55:34.538884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.023 [2024-07-22 10:55:34.538892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.023 qpair failed and we were unable to recover it. 00:39:29.023 [2024-07-22 10:55:34.539213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.023 [2024-07-22 10:55:34.539221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.023 qpair failed and we were unable to recover it. 00:39:29.023 [2024-07-22 10:55:34.539414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.023 [2024-07-22 10:55:34.539423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.023 qpair failed and we were unable to recover it. 00:39:29.023 [2024-07-22 10:55:34.539764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.023 [2024-07-22 10:55:34.539772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.023 qpair failed and we were unable to recover it. 00:39:29.023 [2024-07-22 10:55:34.539961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.023 [2024-07-22 10:55:34.539969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.023 qpair failed and we were unable to recover it. 00:39:29.023 [2024-07-22 10:55:34.540140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.023 [2024-07-22 10:55:34.540148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.023 qpair failed and we were unable to recover it. 00:39:29.023 [2024-07-22 10:55:34.540438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.023 [2024-07-22 10:55:34.540447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.023 qpair failed and we were unable to recover it. 00:39:29.023 [2024-07-22 10:55:34.540615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.023 [2024-07-22 10:55:34.540625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.023 qpair failed and we were unable to recover it. 00:39:29.023 [2024-07-22 10:55:34.540790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.023 [2024-07-22 10:55:34.540798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.023 qpair failed and we were unable to recover it. 
00:39:29.023 [2024-07-22 10:55:34.541108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.023 [2024-07-22 10:55:34.541119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.023 qpair failed and we were unable to recover it. 00:39:29.023 [2024-07-22 10:55:34.541316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.023 [2024-07-22 10:55:34.541324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.023 qpair failed and we were unable to recover it. 00:39:29.023 [2024-07-22 10:55:34.541626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.023 [2024-07-22 10:55:34.541634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.023 qpair failed and we were unable to recover it. 00:39:29.023 [2024-07-22 10:55:34.541830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.023 [2024-07-22 10:55:34.541839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.023 qpair failed and we were unable to recover it. 00:39:29.023 [2024-07-22 10:55:34.542167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.023 [2024-07-22 10:55:34.542175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.023 qpair failed and we were unable to recover it. 00:39:29.023 [2024-07-22 10:55:34.542484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.023 [2024-07-22 10:55:34.542494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.023 qpair failed and we were unable to recover it. 00:39:29.023 [2024-07-22 10:55:34.542830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.024 [2024-07-22 10:55:34.542839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.024 qpair failed and we were unable to recover it. 00:39:29.024 [2024-07-22 10:55:34.542885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.024 [2024-07-22 10:55:34.542893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.024 qpair failed and we were unable to recover it. 00:39:29.024 [2024-07-22 10:55:34.543212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.024 [2024-07-22 10:55:34.543221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.024 qpair failed and we were unable to recover it. 00:39:29.024 [2024-07-22 10:55:34.543575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.024 [2024-07-22 10:55:34.543583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.024 qpair failed and we were unable to recover it. 
00:39:29.024 [2024-07-22 10:55:34.543769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.024 [2024-07-22 10:55:34.543778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.024 qpair failed and we were unable to recover it. 00:39:29.024 [2024-07-22 10:55:34.544070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.024 [2024-07-22 10:55:34.544078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.024 qpair failed and we were unable to recover it. 00:39:29.024 [2024-07-22 10:55:34.544427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.024 [2024-07-22 10:55:34.544435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.024 qpair failed and we were unable to recover it. 00:39:29.024 [2024-07-22 10:55:34.544763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.024 [2024-07-22 10:55:34.544771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.024 qpair failed and we were unable to recover it. 00:39:29.024 [2024-07-22 10:55:34.545124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.024 [2024-07-22 10:55:34.545132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.024 qpair failed and we were unable to recover it. 00:39:29.024 [2024-07-22 10:55:34.545488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.024 [2024-07-22 10:55:34.545496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.024 qpair failed and we were unable to recover it. 00:39:29.024 [2024-07-22 10:55:34.545856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.024 [2024-07-22 10:55:34.545865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.024 qpair failed and we were unable to recover it. 00:39:29.024 [2024-07-22 10:55:34.546192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.024 [2024-07-22 10:55:34.546200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.024 qpair failed and we were unable to recover it. 00:39:29.024 [2024-07-22 10:55:34.546391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.024 [2024-07-22 10:55:34.546402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.024 qpair failed and we were unable to recover it. 00:39:29.024 [2024-07-22 10:55:34.546735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.024 [2024-07-22 10:55:34.546744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.024 qpair failed and we were unable to recover it. 
00:39:29.024 [2024-07-22 10:55:34.547064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.024 [2024-07-22 10:55:34.547072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.024 qpair failed and we were unable to recover it. 00:39:29.024 [2024-07-22 10:55:34.547428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.024 [2024-07-22 10:55:34.547437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.024 qpair failed and we were unable to recover it. 00:39:29.024 [2024-07-22 10:55:34.547754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.024 [2024-07-22 10:55:34.547762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.024 qpair failed and we were unable to recover it. 00:39:29.024 [2024-07-22 10:55:34.548085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.024 [2024-07-22 10:55:34.548093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.024 qpair failed and we were unable to recover it. 00:39:29.024 [2024-07-22 10:55:34.548447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.024 [2024-07-22 10:55:34.548455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.024 qpair failed and we were unable to recover it. 00:39:29.024 [2024-07-22 10:55:34.548643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.024 [2024-07-22 10:55:34.548652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.024 qpair failed and we were unable to recover it. 00:39:29.024 [2024-07-22 10:55:34.548951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.024 [2024-07-22 10:55:34.548959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.024 qpair failed and we were unable to recover it. 00:39:29.024 [2024-07-22 10:55:34.549159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.024 [2024-07-22 10:55:34.549170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.024 qpair failed and we were unable to recover it. 00:39:29.024 [2024-07-22 10:55:34.549378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.024 [2024-07-22 10:55:34.549387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.024 qpair failed and we were unable to recover it. 00:39:29.024 [2024-07-22 10:55:34.549781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.024 [2024-07-22 10:55:34.549790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.024 qpair failed and we were unable to recover it. 
00:39:29.024 [2024-07-22 10:55:34.550104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.024 [2024-07-22 10:55:34.550112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.024 qpair failed and we were unable to recover it. 00:39:29.024 [2024-07-22 10:55:34.550436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.024 [2024-07-22 10:55:34.550444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.024 qpair failed and we were unable to recover it. 00:39:29.024 [2024-07-22 10:55:34.550633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.024 [2024-07-22 10:55:34.550640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.024 qpair failed and we were unable to recover it. 00:39:29.024 [2024-07-22 10:55:34.550968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.024 [2024-07-22 10:55:34.550976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.024 qpair failed and we were unable to recover it. 00:39:29.024 [2024-07-22 10:55:34.551271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.024 [2024-07-22 10:55:34.551279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.024 qpair failed and we were unable to recover it. 00:39:29.024 [2024-07-22 10:55:34.551481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.024 [2024-07-22 10:55:34.551489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.024 qpair failed and we were unable to recover it. 00:39:29.024 [2024-07-22 10:55:34.551697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.024 [2024-07-22 10:55:34.551704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.024 qpair failed and we were unable to recover it. 00:39:29.024 [2024-07-22 10:55:34.551883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.024 [2024-07-22 10:55:34.551891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.024 qpair failed and we were unable to recover it. 00:39:29.024 [2024-07-22 10:55:34.552145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.024 [2024-07-22 10:55:34.552154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.024 qpair failed and we were unable to recover it. 00:39:29.024 [2024-07-22 10:55:34.552499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.024 [2024-07-22 10:55:34.552508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.024 qpair failed and we were unable to recover it. 
00:39:29.024 [2024-07-22 10:55:34.552846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.024 [2024-07-22 10:55:34.552854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.024 qpair failed and we were unable to recover it. 00:39:29.024 [2024-07-22 10:55:34.553042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.024 [2024-07-22 10:55:34.553050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.024 qpair failed and we were unable to recover it. 00:39:29.024 [2024-07-22 10:55:34.553427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.024 [2024-07-22 10:55:34.553436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.024 qpair failed and we were unable to recover it. 00:39:29.024 [2024-07-22 10:55:34.553758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.024 [2024-07-22 10:55:34.553766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.024 qpair failed and we were unable to recover it. 00:39:29.024 [2024-07-22 10:55:34.554095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.024 [2024-07-22 10:55:34.554102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.024 qpair failed and we were unable to recover it. 00:39:29.024 [2024-07-22 10:55:34.554404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.024 [2024-07-22 10:55:34.554412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.024 qpair failed and we were unable to recover it. 00:39:29.024 [2024-07-22 10:55:34.554755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.024 [2024-07-22 10:55:34.554763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.024 qpair failed and we were unable to recover it. 00:39:29.024 [2024-07-22 10:55:34.555094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.025 [2024-07-22 10:55:34.555102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.025 qpair failed and we were unable to recover it. 00:39:29.025 [2024-07-22 10:55:34.555370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.025 [2024-07-22 10:55:34.555378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.025 qpair failed and we were unable to recover it. 00:39:29.025 [2024-07-22 10:55:34.555566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.025 [2024-07-22 10:55:34.555575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.025 qpair failed and we were unable to recover it. 
00:39:29.025 [2024-07-22 10:55:34.555800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:29.025 [2024-07-22 10:55:34.555808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420
00:39:29.025 qpair failed and we were unable to recover it.
[... the same connect() failed / sock connection error / qpair failed sequence repeats for dozens of further connect retries against 10.0.0.2:4420, with only the microsecond timestamps changing ...]
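Errno 111 on Linux is ECONNREFUSED: the initiator's nvme_tcp_qpair_connect_sock() keeps retrying a TCP connect to 10.0.0.2:4420 and is refused each time because nothing is accepting on that port yet, which is expected while the target side is not listening in this target-disconnect test case. As a minimal shell sketch (not part of the test suite; the address and port below are placeholders chosen only to mirror the log), the same errno can be reproduced by connecting to a port with no listener:

# Sketch only: attempt a TCP connect to a port with no listener and observe that
# it is refused, i.e. connect() fails with errno 111 (ECONNREFUSED).
addr=127.0.0.1
port=4420
if bash -c "exec 3<>/dev/tcp/${addr}/${port}" 2>/dev/null; then
  echo "unexpected: something is listening on ${addr}:${port}"
else
  echo "connect() to ${addr}:${port} was refused (errno 111), matching the errors above"
fi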
00:39:29.026 [2024-07-22 10:55:34.567471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.026 [2024-07-22 10:55:34.567480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.026 qpair failed and we were unable to recover it. 00:39:29.026 [2024-07-22 10:55:34.567803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.026 [2024-07-22 10:55:34.567811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.026 qpair failed and we were unable to recover it. 00:39:29.026 [2024-07-22 10:55:34.568130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.026 [2024-07-22 10:55:34.568138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.026 qpair failed and we were unable to recover it. 00:39:29.026 [2024-07-22 10:55:34.568448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.026 [2024-07-22 10:55:34.568456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.026 qpair failed and we were unable to recover it. 00:39:29.026 [2024-07-22 10:55:34.568830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.026 [2024-07-22 10:55:34.568839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.026 qpair failed and we were unable to recover it. 00:39:29.026 10:55:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:29.026 [2024-07-22 10:55:34.569164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.026 [2024-07-22 10:55:34.569173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.026 qpair failed and we were unable to recover it. 00:39:29.026 10:55:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:29.026 [2024-07-22 10:55:34.569500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.026 [2024-07-22 10:55:34.569510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.026 qpair failed and we were unable to recover it. 00:39:29.026 10:55:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:29.026 [2024-07-22 10:55:34.569831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.026 [2024-07-22 10:55:34.569841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.026 qpair failed and we were unable to recover it. 
00:39:29.026 10:55:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:29.026 [2024-07-22 10:55:34.570167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.026 [2024-07-22 10:55:34.570176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.026 qpair failed and we were unable to recover it. 00:39:29.026 [2024-07-22 10:55:34.570368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.026 [2024-07-22 10:55:34.570376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.026 qpair failed and we were unable to recover it. 00:39:29.026 [2024-07-22 10:55:34.570697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.026 [2024-07-22 10:55:34.570706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.026 qpair failed and we were unable to recover it. 00:39:29.026 [2024-07-22 10:55:34.571030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.026 [2024-07-22 10:55:34.571038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.026 qpair failed and we were unable to recover it. 00:39:29.026 [2024-07-22 10:55:34.571390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.026 [2024-07-22 10:55:34.571403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.026 qpair failed and we were unable to recover it. 00:39:29.026 [2024-07-22 10:55:34.571729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.026 [2024-07-22 10:55:34.571737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.026 qpair failed and we were unable to recover it. 00:39:29.026 [2024-07-22 10:55:34.571923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.026 [2024-07-22 10:55:34.571932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.026 qpair failed and we were unable to recover it. 00:39:29.026 [2024-07-22 10:55:34.572103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.026 [2024-07-22 10:55:34.572113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.026 qpair failed and we were unable to recover it. 00:39:29.026 [2024-07-22 10:55:34.572311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.026 [2024-07-22 10:55:34.572319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.026 qpair failed and we were unable to recover it. 
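The bdev_malloc_create trace above is the first target-side setup step: through rpc_cmd (the suite's wrapper around scripts/rpc.py) it asks the running SPDK application for a 64 MB malloc-backed bdev with 512-byte blocks named Malloc0. A minimal sketch of issuing the same RPC directly, assuming a running SPDK target and an SPDK checkout whose location is given by the hypothetical SPDK_DIR variable:

# Sketch only: create the malloc bdev requested by the trace above, then list it
# to confirm it exists before it is exported over NVMe-oF.
SPDK_DIR=${SPDK_DIR:-./spdk}
"${SPDK_DIR}/scripts/rpc.py" bdev_malloc_create 64 512 -b Malloc0
"${SPDK_DIR}/scripts/rpc.py" bdev_get_bdevs -b Malloc0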
00:39:29.026 [2024-07-22 10:55:34.572509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:29.026 [2024-07-22 10:55:34.572517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420
00:39:29.026 qpair failed and we were unable to recover it.
[... the same connect() failed / sock connection error / qpair failed sequence repeats for dozens of further retries, with only the timestamps changing ...]
00:39:29.027 [2024-07-22 10:55:34.587436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:29.027 [2024-07-22 10:55:34.587444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420
00:39:29.027 qpair failed and we were unable to recover it.
[... the connect()/qpair-failed retries continue; interleaved with them the test script logs the bdev creation result and the next setup step ...]
00:39:29.027 Malloc0
00:39:29.027 10:55:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:39:29.028 10:55:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:39:29.028 10:55:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:39:29.028 10:55:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:39:29.028 [2024-07-22 10:55:34.590064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:29.028 [2024-07-22 10:55:34.590072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420
00:39:29.028 qpair failed and we were unable to recover it.
[... the same connect() failed / sock connection error / qpair failed sequence repeats for further retries, with only the timestamps changing ...]
00:39:29.028 [2024-07-22 10:55:34.595119] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[... the connect()/qpair-failed retries continue around this notice, with only the timestamps changing ...]
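The nvmf_tcp_create notice above is the target application acknowledging the nvmf_create_transport RPC traced earlier (the test passes -t tcp plus an additional -o option, which is left out here). A minimal sketch of the equivalent direct call, under the same assumptions as the previous sketch (running SPDK target, hypothetical SPDK_DIR path):

# Sketch only: create the TCP transport on a running SPDK nvmf target; this is
# what produces the "*** TCP Transport Init ***" notice seen above.
SPDK_DIR=${SPDK_DIR:-./spdk}
"${SPDK_DIR}/scripts/rpc.py" nvmf_create_transport -t tcp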
00:39:29.028 [2024-07-22 10:55:34.597256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:29.028 [2024-07-22 10:55:34.597264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420
00:39:29.028 qpair failed and we were unable to recover it.
[... the same connect() failed / sock connection error / qpair failed sequence repeats for further retries, with only the timestamps changing ...]
00:39:29.029 [2024-07-22 10:55:34.603059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:29.029 [2024-07-22 10:55:34.603067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420
00:39:29.029 qpair failed and we were unable to recover it.
[... the connect()/qpair-failed retries continue throughout this interval; interleaved with them the test script logs the next setup step ...]
00:39:29.029 10:55:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:39:29.029 10:55:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:39:29.029 10:55:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:39:29.029 10:55:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
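The nvmf_create_subsystem trace above creates the subsystem nqn.2016-06.io.spdk:cnode1 with -a (allow any host) and serial number SPDK00000000000001. A minimal sketch of the same call, followed by the steps a TCP target bring-up typically continues with before an initiator can reach 10.0.0.2:4420 (attaching the Malloc0 namespace and adding a TCP listener); the two follow-on commands are standard rpc.py usage rather than lines taken from this portion of the log, and the paths remain the earlier assumptions:

# Sketch only: create the subsystem as traced above, then (typical next steps,
# not shown in this part of the log) expose Malloc0 as a namespace and listen
# on the address/port the initiator has been retrying against.
SPDK_DIR=${SPDK_DIR:-./spdk}
"${SPDK_DIR}/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"${SPDK_DIR}/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"${SPDK_DIR}/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420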
00:39:29.029 [2024-07-22 10:55:34.608258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.029 [2024-07-22 10:55:34.608266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.029 qpair failed and we were unable to recover it. 00:39:29.029 [2024-07-22 10:55:34.608593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.029 [2024-07-22 10:55:34.608601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.029 qpair failed and we were unable to recover it. 00:39:29.029 [2024-07-22 10:55:34.608897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.029 [2024-07-22 10:55:34.608904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.029 qpair failed and we were unable to recover it. 00:39:29.029 [2024-07-22 10:55:34.609252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.029 [2024-07-22 10:55:34.609259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.029 qpair failed and we were unable to recover it. 00:39:29.029 [2024-07-22 10:55:34.609582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.029 [2024-07-22 10:55:34.609591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.029 qpair failed and we were unable to recover it. 00:39:29.029 [2024-07-22 10:55:34.609917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.029 [2024-07-22 10:55:34.609925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.029 qpair failed and we were unable to recover it. 00:39:29.029 [2024-07-22 10:55:34.610229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.029 [2024-07-22 10:55:34.610237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.029 qpair failed and we were unable to recover it. 00:39:29.029 [2024-07-22 10:55:34.610553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.029 [2024-07-22 10:55:34.610561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.029 qpair failed and we were unable to recover it. 00:39:29.029 [2024-07-22 10:55:34.610742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.029 [2024-07-22 10:55:34.610750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.029 qpair failed and we were unable to recover it. 00:39:29.029 [2024-07-22 10:55:34.611048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.030 [2024-07-22 10:55:34.611056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.030 qpair failed and we were unable to recover it. 
00:39:29.030 [2024-07-22 10:55:34.611256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.030 [2024-07-22 10:55:34.611264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.030 qpair failed and we were unable to recover it. 00:39:29.030 [2024-07-22 10:55:34.611460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.030 [2024-07-22 10:55:34.611468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.030 qpair failed and we were unable to recover it. 00:39:29.030 [2024-07-22 10:55:34.611818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.030 [2024-07-22 10:55:34.611826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.030 qpair failed and we were unable to recover it. 00:39:29.030 [2024-07-22 10:55:34.612146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.030 [2024-07-22 10:55:34.612154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.030 qpair failed and we were unable to recover it. 00:39:29.030 [2024-07-22 10:55:34.612340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.030 [2024-07-22 10:55:34.612349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.030 qpair failed and we were unable to recover it. 00:39:29.030 [2024-07-22 10:55:34.612657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.030 [2024-07-22 10:55:34.612665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.030 qpair failed and we were unable to recover it. 00:39:29.030 [2024-07-22 10:55:34.612992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.030 [2024-07-22 10:55:34.613000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.030 qpair failed and we were unable to recover it. 00:39:29.030 [2024-07-22 10:55:34.613320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.030 [2024-07-22 10:55:34.613329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.030 qpair failed and we were unable to recover it. 00:39:29.030 [2024-07-22 10:55:34.613699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.030 [2024-07-22 10:55:34.613707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.030 qpair failed and we were unable to recover it. 00:39:29.030 [2024-07-22 10:55:34.614053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.030 [2024-07-22 10:55:34.614061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.030 qpair failed and we were unable to recover it. 
00:39:29.030 [2024-07-22 10:55:34.614409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.030 [2024-07-22 10:55:34.614417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.030 qpair failed and we were unable to recover it. 00:39:29.030 [2024-07-22 10:55:34.614716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.030 [2024-07-22 10:55:34.614724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.030 qpair failed and we were unable to recover it. 00:39:29.030 [2024-07-22 10:55:34.614918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.030 [2024-07-22 10:55:34.614926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.030 qpair failed and we were unable to recover it. 00:39:29.030 [2024-07-22 10:55:34.615178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.030 [2024-07-22 10:55:34.615185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.030 qpair failed and we were unable to recover it. 00:39:29.030 [2024-07-22 10:55:34.615496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.030 [2024-07-22 10:55:34.615504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.030 qpair failed and we were unable to recover it. 00:39:29.030 [2024-07-22 10:55:34.615866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.030 [2024-07-22 10:55:34.615873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.030 qpair failed and we were unable to recover it. 00:39:29.030 [2024-07-22 10:55:34.616186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.030 [2024-07-22 10:55:34.616194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.030 10:55:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:29.030 qpair failed and we were unable to recover it. 00:39:29.030 [2024-07-22 10:55:34.616444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.030 [2024-07-22 10:55:34.616452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.030 qpair failed and we were unable to recover it. 00:39:29.030 [2024-07-22 10:55:34.616625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.030 [2024-07-22 10:55:34.616633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.030 qpair failed and we were unable to recover it.
00:39:29.030 10:55:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:29.030 [2024-07-22 10:55:34.616935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.030 [2024-07-22 10:55:34.616943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.030 qpair failed and we were unable to recover it. 00:39:29.030 10:55:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:29.030 10:55:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:29.030 [2024-07-22 10:55:34.617260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.030 [2024-07-22 10:55:34.617268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.030 qpair failed and we were unable to recover it. 00:39:29.030 [2024-07-22 10:55:34.617539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.030 [2024-07-22 10:55:34.617547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.030 qpair failed and we were unable to recover it. 00:39:29.030 [2024-07-22 10:55:34.617769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.030 [2024-07-22 10:55:34.617777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.030 qpair failed and we were unable to recover it. 00:39:29.030 [2024-07-22 10:55:34.618096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.030 [2024-07-22 10:55:34.618103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.030 qpair failed and we were unable to recover it. 00:39:29.030 [2024-07-22 10:55:34.618430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.030 [2024-07-22 10:55:34.618438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.030 qpair failed and we were unable to recover it. 00:39:29.030 [2024-07-22 10:55:34.618632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.030 [2024-07-22 10:55:34.618640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.030 qpair failed and we were unable to recover it. 00:39:29.030 [2024-07-22 10:55:34.618986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.030 [2024-07-22 10:55:34.618993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.030 qpair failed and we were unable to recover it. 
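The nvmf_subsystem_add_ns step traced above attaches a namespace backed by the bdev named Malloc0 to the new subsystem; Malloc0 is presumably a malloc bdev created earlier in this run (e.g. with bdev_malloc_create, not shown here). A minimal sketch of the same call, rpc.py path again assumed:

    # Hypothetical manual equivalent: expose bdev Malloc0 as a namespace of cnode1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0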
00:39:29.030 [2024-07-22 10:55:34.619331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.030 [2024-07-22 10:55:34.619340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.030 qpair failed and we were unable to recover it. 00:39:29.030 [2024-07-22 10:55:34.619682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.030 [2024-07-22 10:55:34.619690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.030 qpair failed and we were unable to recover it. 00:39:29.030 [2024-07-22 10:55:34.620007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.030 [2024-07-22 10:55:34.620015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.030 qpair failed and we were unable to recover it. 00:39:29.030 [2024-07-22 10:55:34.620347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.030 [2024-07-22 10:55:34.620355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.030 qpair failed and we were unable to recover it. 00:39:29.030 [2024-07-22 10:55:34.620677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.030 [2024-07-22 10:55:34.620685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.030 qpair failed and we were unable to recover it. 00:39:29.030 [2024-07-22 10:55:34.621042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.030 [2024-07-22 10:55:34.621051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.030 qpair failed and we were unable to recover it. 00:39:29.030 [2024-07-22 10:55:34.621090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.030 [2024-07-22 10:55:34.621096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.031 qpair failed and we were unable to recover it. 00:39:29.031 [2024-07-22 10:55:34.621418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.031 [2024-07-22 10:55:34.621426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.031 qpair failed and we were unable to recover it. 00:39:29.031 [2024-07-22 10:55:34.621665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.031 [2024-07-22 10:55:34.621672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.031 qpair failed and we were unable to recover it. 00:39:29.031 [2024-07-22 10:55:34.621854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.031 [2024-07-22 10:55:34.621862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.031 qpair failed and we were unable to recover it. 
00:39:29.031 [2024-07-22 10:55:34.622132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.031 [2024-07-22 10:55:34.622139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.031 qpair failed and we were unable to recover it. 00:39:29.031 [2024-07-22 10:55:34.622313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.031 [2024-07-22 10:55:34.622322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.031 qpair failed and we were unable to recover it. 00:39:29.031 [2024-07-22 10:55:34.622611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.031 [2024-07-22 10:55:34.622619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.031 qpair failed and we were unable to recover it. 00:39:29.031 [2024-07-22 10:55:34.622816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.031 [2024-07-22 10:55:34.622823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.031 qpair failed and we were unable to recover it. 00:39:29.031 [2024-07-22 10:55:34.623020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.031 [2024-07-22 10:55:34.623029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.031 qpair failed and we were unable to recover it. 00:39:29.031 [2024-07-22 10:55:34.623329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.031 [2024-07-22 10:55:34.623337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.031 qpair failed and we were unable to recover it. 00:39:29.031 [2024-07-22 10:55:34.623520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.031 [2024-07-22 10:55:34.623529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.031 qpair failed and we were unable to recover it. 00:39:29.031 [2024-07-22 10:55:34.623860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.031 [2024-07-22 10:55:34.623868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.031 qpair failed and we were unable to recover it. 00:39:29.031 [2024-07-22 10:55:34.624212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.031 [2024-07-22 10:55:34.624220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.031 qpair failed and we were unable to recover it. 00:39:29.031 [2024-07-22 10:55:34.624413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.031 [2024-07-22 10:55:34.624421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.031 qpair failed and we were unable to recover it. 
00:39:29.031 [2024-07-22 10:55:34.624722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.031 [2024-07-22 10:55:34.624730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.031 qpair failed and we were unable to recover it. 00:39:29.031 [2024-07-22 10:55:34.625085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.031 [2024-07-22 10:55:34.625092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.031 qpair failed and we were unable to recover it. 00:39:29.031 [2024-07-22 10:55:34.625439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.031 [2024-07-22 10:55:34.625447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.031 qpair failed and we were unable to recover it. 00:39:29.031 [2024-07-22 10:55:34.625790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.031 [2024-07-22 10:55:34.625798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.031 qpair failed and we were unable to recover it. 00:39:29.031 [2024-07-22 10:55:34.625994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.031 [2024-07-22 10:55:34.626002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.031 qpair failed and we were unable to recover it. 00:39:29.031 [2024-07-22 10:55:34.626307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.031 [2024-07-22 10:55:34.626314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.031 qpair failed and we were unable to recover it. 00:39:29.031 [2024-07-22 10:55:34.626537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.031 [2024-07-22 10:55:34.626545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.031 qpair failed and we were unable to recover it. 00:39:29.031 [2024-07-22 10:55:34.626878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.031 [2024-07-22 10:55:34.626886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.031 qpair failed and we were unable to recover it. 00:39:29.031 [2024-07-22 10:55:34.627215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.031 [2024-07-22 10:55:34.627222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.031 qpair failed and we were unable to recover it. 00:39:29.031 [2024-07-22 10:55:34.627547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.031 [2024-07-22 10:55:34.627555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.031 qpair failed and we were unable to recover it. 
00:39:29.031 [2024-07-22 10:55:34.627854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.031 [2024-07-22 10:55:34.627862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.031 qpair failed and we were unable to recover it. 00:39:29.031 [2024-07-22 10:55:34.628181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.031 [2024-07-22 10:55:34.628188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.031 qpair failed and we were unable to recover it. 00:39:29.031 10:55:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:29.031 [2024-07-22 10:55:34.628384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.031 [2024-07-22 10:55:34.628392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.031 qpair failed and we were unable to recover it. 00:39:29.031 10:55:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:29.031 [2024-07-22 10:55:34.628691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.031 [2024-07-22 10:55:34.628699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.031 qpair failed and we were unable to recover it. 00:39:29.031 10:55:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:29.031 [2024-07-22 10:55:34.629024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.031 [2024-07-22 10:55:34.629032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.031 qpair failed and we were unable to recover it. 00:39:29.031 10:55:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:29.031 [2024-07-22 10:55:34.629371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.031 [2024-07-22 10:55:34.629379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.031 qpair failed and we were unable to recover it. 00:39:29.031 [2024-07-22 10:55:34.629684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.031 [2024-07-22 10:55:34.629692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.031 qpair failed and we were unable to recover it. 00:39:29.031 [2024-07-22 10:55:34.630005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.031 [2024-07-22 10:55:34.630013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.031 qpair failed and we were unable to recover it. 
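The nvmf_subsystem_add_listener call traced here is what makes the target reachable at all: until it completes, every host-side connect() is refused, which is why the errno = 111 records keep accumulating right up to the "Target Listening" notice a little further below. A sketch of the same call, with the flags taken verbatim from the trace (-t transport type, -a target address, -s service/port) and the rpc.py path assumed:

    # Hypothetical manual equivalent: start listening for NVMe/TCP on 10.0.0.2:4420
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420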
00:39:29.031 [2024-07-22 10:55:34.630163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.031 [2024-07-22 10:55:34.630179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.031 qpair failed and we were unable to recover it. 00:39:29.031 [2024-07-22 10:55:34.630538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.031 [2024-07-22 10:55:34.630546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.031 qpair failed and we were unable to recover it. 00:39:29.031 [2024-07-22 10:55:34.630866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.031 [2024-07-22 10:55:34.630875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.031 qpair failed and we were unable to recover it. 00:39:29.031 [2024-07-22 10:55:34.631097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.031 [2024-07-22 10:55:34.631105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.031 qpair failed and we were unable to recover it. 00:39:29.031 [2024-07-22 10:55:34.631437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.031 [2024-07-22 10:55:34.631445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.031 qpair failed and we were unable to recover it. 00:39:29.031 [2024-07-22 10:55:34.631863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.031 [2024-07-22 10:55:34.631871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.031 qpair failed and we were unable to recover it. 00:39:29.031 [2024-07-22 10:55:34.632192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.031 [2024-07-22 10:55:34.632200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.031 qpair failed and we were unable to recover it. 00:39:29.031 [2024-07-22 10:55:34.632446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.031 [2024-07-22 10:55:34.632454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.031 qpair failed and we were unable to recover it. 00:39:29.032 [2024-07-22 10:55:34.632696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.032 [2024-07-22 10:55:34.632703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.032 qpair failed and we were unable to recover it. 00:39:29.032 [2024-07-22 10:55:34.633031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.032 [2024-07-22 10:55:34.633039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.032 qpair failed and we were unable to recover it. 
00:39:29.032 [2024-07-22 10:55:34.633401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.032 [2024-07-22 10:55:34.633409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.032 qpair failed and we were unable to recover it. 00:39:29.032 [2024-07-22 10:55:34.633629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.032 [2024-07-22 10:55:34.633636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.032 qpair failed and we were unable to recover it. 00:39:29.032 [2024-07-22 10:55:34.633956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.032 [2024-07-22 10:55:34.633964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.032 qpair failed and we were unable to recover it. 00:39:29.032 [2024-07-22 10:55:34.634206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.032 [2024-07-22 10:55:34.634215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.032 qpair failed and we were unable to recover it. 00:39:29.032 [2024-07-22 10:55:34.634418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.032 [2024-07-22 10:55:34.634426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.032 qpair failed and we were unable to recover it. 00:39:29.032 [2024-07-22 10:55:34.634784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.032 [2024-07-22 10:55:34.634792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.032 qpair failed and we were unable to recover it. 00:39:29.032 [2024-07-22 10:55:34.634984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.032 [2024-07-22 10:55:34.634993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.032 qpair failed and we were unable to recover it. 00:39:29.032 [2024-07-22 10:55:34.635336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.032 [2024-07-22 10:55:34.635344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1658000b90 with addr=10.0.0.2, port=4420 00:39:29.032 qpair failed and we were unable to recover it. 
00:39:29.032 [2024-07-22 10:55:34.635379] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:29.032 10:55:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:29.032 10:55:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:29.032 10:55:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:29.032 10:55:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:29.032 [2024-07-22 10:55:34.645944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.032 [2024-07-22 10:55:34.646011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.032 [2024-07-22 10:55:34.646025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.032 [2024-07-22 10:55:34.646031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.032 [2024-07-22 10:55:34.646036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.032 [2024-07-22 10:55:34.646049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.032 qpair failed and we were unable to recover it. 00:39:29.032 10:55:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:29.032 10:55:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2265075 00:39:29.032 [2024-07-22 10:55:34.655787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.032 [2024-07-22 10:55:34.655846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.032 [2024-07-22 10:55:34.655858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.032 [2024-07-22 10:55:34.655863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.032 [2024-07-22 10:55:34.655867] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.032 [2024-07-22 10:55:34.655878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.032 qpair failed and we were unable to recover it. 
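From this point the failure signature changes: the TCP connect itself now succeeds (the 10.0.0.2:4420 listener is up), but the NVMe-oF Fabrics CONNECT for the I/O queue pair is rejected by the target with "Unknown controller ID 0x1" and completed with sct 1, sc 130 (0x82, the Fabrics CONNECT invalid-parameters status), after which the host sees CQ transport error -6 (ENXIO) and reports the same "qpair failed" verdict. That is consistent with the scenario this target_disconnect test drives: the I/O queue pair's CONNECT names a controller ID the target no longer recognizes. Two hypothetical triage one-liners over a saved copy of this output (file name assumed) separate the two failure modes:

    # Refused TCP connects vs. rejected I/O-queue CONNECTs (log file name is assumed)
    grep -c 'errno = 111' target_disconnect.log
    grep -c 'Unknown controller ID' target_disconnect.log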
00:39:29.032 [2024-07-22 10:55:34.665864] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.032 [2024-07-22 10:55:34.665922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.032 [2024-07-22 10:55:34.665934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.032 [2024-07-22 10:55:34.665939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.032 [2024-07-22 10:55:34.665943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.032 [2024-07-22 10:55:34.665954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.032 qpair failed and we were unable to recover it. 00:39:29.032 [2024-07-22 10:55:34.675906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.032 [2024-07-22 10:55:34.675970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.032 [2024-07-22 10:55:34.675981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.032 [2024-07-22 10:55:34.675986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.032 [2024-07-22 10:55:34.675990] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.032 [2024-07-22 10:55:34.676003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.032 qpair failed and we were unable to recover it. 00:39:29.032 [2024-07-22 10:55:34.685918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.032 [2024-07-22 10:55:34.686000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.032 [2024-07-22 10:55:34.686011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.032 [2024-07-22 10:55:34.686016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.032 [2024-07-22 10:55:34.686021] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.032 [2024-07-22 10:55:34.686031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.032 qpair failed and we were unable to recover it. 
00:39:29.032 [2024-07-22 10:55:34.695899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.032 [2024-07-22 10:55:34.695947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.032 [2024-07-22 10:55:34.695959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.032 [2024-07-22 10:55:34.695964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.032 [2024-07-22 10:55:34.695968] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.032 [2024-07-22 10:55:34.695978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.032 qpair failed and we were unable to recover it. 00:39:29.293 [2024-07-22 10:55:34.705965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.293 [2024-07-22 10:55:34.706013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.293 [2024-07-22 10:55:34.706025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.293 [2024-07-22 10:55:34.706030] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.293 [2024-07-22 10:55:34.706034] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.293 [2024-07-22 10:55:34.706045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.293 qpair failed and we were unable to recover it. 00:39:29.293 [2024-07-22 10:55:34.715975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.293 [2024-07-22 10:55:34.716025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.293 [2024-07-22 10:55:34.716036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.293 [2024-07-22 10:55:34.716041] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.293 [2024-07-22 10:55:34.716046] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.293 [2024-07-22 10:55:34.716056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.293 qpair failed and we were unable to recover it. 
00:39:29.293 [2024-07-22 10:55:34.725994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.293 [2024-07-22 10:55:34.726057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.293 [2024-07-22 10:55:34.726071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.293 [2024-07-22 10:55:34.726076] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.293 [2024-07-22 10:55:34.726081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.293 [2024-07-22 10:55:34.726091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.293 qpair failed and we were unable to recover it. 00:39:29.293 [2024-07-22 10:55:34.736053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.293 [2024-07-22 10:55:34.736126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.293 [2024-07-22 10:55:34.736137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.293 [2024-07-22 10:55:34.736141] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.293 [2024-07-22 10:55:34.736146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.293 [2024-07-22 10:55:34.736156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.293 qpair failed and we were unable to recover it. 00:39:29.293 [2024-07-22 10:55:34.746050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.293 [2024-07-22 10:55:34.746101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.293 [2024-07-22 10:55:34.746111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.293 [2024-07-22 10:55:34.746116] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.293 [2024-07-22 10:55:34.746121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.293 [2024-07-22 10:55:34.746131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.293 qpair failed and we were unable to recover it. 
00:39:29.293 [2024-07-22 10:55:34.756146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.293 [2024-07-22 10:55:34.756204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.293 [2024-07-22 10:55:34.756222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.293 [2024-07-22 10:55:34.756228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.293 [2024-07-22 10:55:34.756233] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.293 [2024-07-22 10:55:34.756247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.293 qpair failed and we were unable to recover it. 00:39:29.293 [2024-07-22 10:55:34.766155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.293 [2024-07-22 10:55:34.766213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.293 [2024-07-22 10:55:34.766232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.293 [2024-07-22 10:55:34.766237] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.293 [2024-07-22 10:55:34.766245] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.293 [2024-07-22 10:55:34.766259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.293 qpair failed and we were unable to recover it. 00:39:29.293 [2024-07-22 10:55:34.776052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.293 [2024-07-22 10:55:34.776101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.293 [2024-07-22 10:55:34.776113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.293 [2024-07-22 10:55:34.776119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.293 [2024-07-22 10:55:34.776123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.293 [2024-07-22 10:55:34.776134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.293 qpair failed and we were unable to recover it. 
00:39:29.293 [2024-07-22 10:55:34.786208] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.293 [2024-07-22 10:55:34.786265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.293 [2024-07-22 10:55:34.786277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.293 [2024-07-22 10:55:34.786282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.293 [2024-07-22 10:55:34.786286] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.293 [2024-07-22 10:55:34.786297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.293 qpair failed and we were unable to recover it. 00:39:29.293 [2024-07-22 10:55:34.796191] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.293 [2024-07-22 10:55:34.796248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.293 [2024-07-22 10:55:34.796267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.293 [2024-07-22 10:55:34.796272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.293 [2024-07-22 10:55:34.796277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.293 [2024-07-22 10:55:34.796291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.293 qpair failed and we were unable to recover it. 00:39:29.293 [2024-07-22 10:55:34.806143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.293 [2024-07-22 10:55:34.806197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.293 [2024-07-22 10:55:34.806209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.293 [2024-07-22 10:55:34.806215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.293 [2024-07-22 10:55:34.806219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.293 [2024-07-22 10:55:34.806231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.293 qpair failed and we were unable to recover it. 
00:39:29.293 [2024-07-22 10:55:34.816144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.293 [2024-07-22 10:55:34.816198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.293 [2024-07-22 10:55:34.816209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.293 [2024-07-22 10:55:34.816214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.293 [2024-07-22 10:55:34.816219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.293 [2024-07-22 10:55:34.816230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.293 qpair failed and we were unable to recover it. 00:39:29.293 [2024-07-22 10:55:34.826166] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.293 [2024-07-22 10:55:34.826237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.293 [2024-07-22 10:55:34.826249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.293 [2024-07-22 10:55:34.826254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.293 [2024-07-22 10:55:34.826259] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.293 [2024-07-22 10:55:34.826269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.293 qpair failed and we were unable to recover it. 00:39:29.293 [2024-07-22 10:55:34.836359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.293 [2024-07-22 10:55:34.836421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.293 [2024-07-22 10:55:34.836432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.293 [2024-07-22 10:55:34.836437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.293 [2024-07-22 10:55:34.836442] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.293 [2024-07-22 10:55:34.836452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.293 qpair failed and we were unable to recover it. 
00:39:29.293 [2024-07-22 10:55:34.846361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.294 [2024-07-22 10:55:34.846413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.294 [2024-07-22 10:55:34.846424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.294 [2024-07-22 10:55:34.846428] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.294 [2024-07-22 10:55:34.846433] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.294 [2024-07-22 10:55:34.846443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.294 qpair failed and we were unable to recover it. 00:39:29.294 [2024-07-22 10:55:34.856428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.294 [2024-07-22 10:55:34.856501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.294 [2024-07-22 10:55:34.856512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.294 [2024-07-22 10:55:34.856516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.294 [2024-07-22 10:55:34.856524] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.294 [2024-07-22 10:55:34.856535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.294 qpair failed and we were unable to recover it. 00:39:29.294 [2024-07-22 10:55:34.866408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.294 [2024-07-22 10:55:34.866454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.294 [2024-07-22 10:55:34.866465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.294 [2024-07-22 10:55:34.866470] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.294 [2024-07-22 10:55:34.866474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.294 [2024-07-22 10:55:34.866484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.294 qpair failed and we were unable to recover it. 
00:39:29.294 [2024-07-22 10:55:34.876426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.294 [2024-07-22 10:55:34.876480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.294 [2024-07-22 10:55:34.876490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.294 [2024-07-22 10:55:34.876495] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.294 [2024-07-22 10:55:34.876499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.294 [2024-07-22 10:55:34.876509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.294 qpair failed and we were unable to recover it. 00:39:29.294 [2024-07-22 10:55:34.886499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.294 [2024-07-22 10:55:34.886607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.294 [2024-07-22 10:55:34.886617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.294 [2024-07-22 10:55:34.886622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.294 [2024-07-22 10:55:34.886627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.294 [2024-07-22 10:55:34.886637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.294 qpair failed and we were unable to recover it. 00:39:29.294 [2024-07-22 10:55:34.896616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.294 [2024-07-22 10:55:34.896686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.294 [2024-07-22 10:55:34.896696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.294 [2024-07-22 10:55:34.896701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.294 [2024-07-22 10:55:34.896706] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.294 [2024-07-22 10:55:34.896716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.294 qpair failed and we were unable to recover it. 
00:39:29.294 [2024-07-22 10:55:34.906562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.294 [2024-07-22 10:55:34.906612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.294 [2024-07-22 10:55:34.906623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.294 [2024-07-22 10:55:34.906628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.294 [2024-07-22 10:55:34.906632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.294 [2024-07-22 10:55:34.906642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.294 qpair failed and we were unable to recover it. 00:39:29.294 [2024-07-22 10:55:34.916598] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.294 [2024-07-22 10:55:34.916653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.294 [2024-07-22 10:55:34.916664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.294 [2024-07-22 10:55:34.916668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.294 [2024-07-22 10:55:34.916673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.294 [2024-07-22 10:55:34.916683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.294 qpair failed and we were unable to recover it. 00:39:29.294 [2024-07-22 10:55:34.926614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.294 [2024-07-22 10:55:34.926670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.294 [2024-07-22 10:55:34.926681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.294 [2024-07-22 10:55:34.926686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.294 [2024-07-22 10:55:34.926691] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.294 [2024-07-22 10:55:34.926701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.294 qpair failed and we were unable to recover it. 
00:39:29.294 [2024-07-22 10:55:34.936692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.294 [2024-07-22 10:55:34.936765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.294 [2024-07-22 10:55:34.936776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.294 [2024-07-22 10:55:34.936781] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.294 [2024-07-22 10:55:34.936785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.294 [2024-07-22 10:55:34.936795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.294 qpair failed and we were unable to recover it. 00:39:29.294 [2024-07-22 10:55:34.946506] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.294 [2024-07-22 10:55:34.946554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.294 [2024-07-22 10:55:34.946565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.294 [2024-07-22 10:55:34.946573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.294 [2024-07-22 10:55:34.946577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.294 [2024-07-22 10:55:34.946588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.294 qpair failed and we were unable to recover it. 00:39:29.294 [2024-07-22 10:55:34.956645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.294 [2024-07-22 10:55:34.956697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.294 [2024-07-22 10:55:34.956708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.294 [2024-07-22 10:55:34.956713] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.294 [2024-07-22 10:55:34.956717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.294 [2024-07-22 10:55:34.956728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.294 qpair failed and we were unable to recover it. 
00:39:29.294 [2024-07-22 10:55:34.966660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.294 [2024-07-22 10:55:34.966763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.294 [2024-07-22 10:55:34.966774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.294 [2024-07-22 10:55:34.966779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.294 [2024-07-22 10:55:34.966784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.294 [2024-07-22 10:55:34.966794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.294 qpair failed and we were unable to recover it. 00:39:29.294 [2024-07-22 10:55:34.976704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.294 [2024-07-22 10:55:34.976754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.294 [2024-07-22 10:55:34.976766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.294 [2024-07-22 10:55:34.976771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.294 [2024-07-22 10:55:34.976775] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.294 [2024-07-22 10:55:34.976786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.294 qpair failed and we were unable to recover it. 00:39:29.294 [2024-07-22 10:55:34.986612] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.294 [2024-07-22 10:55:34.986667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.294 [2024-07-22 10:55:34.986678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.294 [2024-07-22 10:55:34.986683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.294 [2024-07-22 10:55:34.986687] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.294 [2024-07-22 10:55:34.986697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.294 qpair failed and we were unable to recover it. 
00:39:29.556 [2024-07-22 10:55:34.996733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.557 [2024-07-22 10:55:34.996784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.557 [2024-07-22 10:55:34.996794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.557 [2024-07-22 10:55:34.996800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.557 [2024-07-22 10:55:34.996804] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.557 [2024-07-22 10:55:34.996814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.557 qpair failed and we were unable to recover it. 00:39:29.557 [2024-07-22 10:55:35.006734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.557 [2024-07-22 10:55:35.006787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.557 [2024-07-22 10:55:35.006797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.557 [2024-07-22 10:55:35.006802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.557 [2024-07-22 10:55:35.006806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.557 [2024-07-22 10:55:35.006817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.557 qpair failed and we were unable to recover it. 00:39:29.557 [2024-07-22 10:55:35.016787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.557 [2024-07-22 10:55:35.016835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.557 [2024-07-22 10:55:35.016846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.557 [2024-07-22 10:55:35.016851] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.557 [2024-07-22 10:55:35.016855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.557 [2024-07-22 10:55:35.016865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.557 qpair failed and we were unable to recover it. 
00:39:29.557 [2024-07-22 10:55:35.026832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.557 [2024-07-22 10:55:35.026960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.557 [2024-07-22 10:55:35.026971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.557 [2024-07-22 10:55:35.026976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.557 [2024-07-22 10:55:35.026980] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.557 [2024-07-22 10:55:35.026991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.557 qpair failed and we were unable to recover it. 00:39:29.557 [2024-07-22 10:55:35.036825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.557 [2024-07-22 10:55:35.036908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.557 [2024-07-22 10:55:35.036921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.557 [2024-07-22 10:55:35.036927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.557 [2024-07-22 10:55:35.036932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.557 [2024-07-22 10:55:35.036942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.557 qpair failed and we were unable to recover it. 00:39:29.557 [2024-07-22 10:55:35.046899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.557 [2024-07-22 10:55:35.046958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.557 [2024-07-22 10:55:35.046969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.557 [2024-07-22 10:55:35.046974] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.557 [2024-07-22 10:55:35.046978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.557 [2024-07-22 10:55:35.046989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.557 qpair failed and we were unable to recover it. 
00:39:29.557 [2024-07-22 10:55:35.056968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.557 [2024-07-22 10:55:35.057014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.557 [2024-07-22 10:55:35.057025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.557 [2024-07-22 10:55:35.057030] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.557 [2024-07-22 10:55:35.057034] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.557 [2024-07-22 10:55:35.057044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.557 qpair failed and we were unable to recover it. 00:39:29.557 [2024-07-22 10:55:35.066941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.557 [2024-07-22 10:55:35.066998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.557 [2024-07-22 10:55:35.067009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.557 [2024-07-22 10:55:35.067014] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.557 [2024-07-22 10:55:35.067018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.557 [2024-07-22 10:55:35.067029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.557 qpair failed and we were unable to recover it. 00:39:29.557 [2024-07-22 10:55:35.076979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.557 [2024-07-22 10:55:35.077032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.557 [2024-07-22 10:55:35.077042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.557 [2024-07-22 10:55:35.077047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.557 [2024-07-22 10:55:35.077051] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.557 [2024-07-22 10:55:35.077065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.557 qpair failed and we were unable to recover it. 
00:39:29.557 [2024-07-22 10:55:35.086992] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.557 [2024-07-22 10:55:35.087087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.557 [2024-07-22 10:55:35.087098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.557 [2024-07-22 10:55:35.087103] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.557 [2024-07-22 10:55:35.087107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.557 [2024-07-22 10:55:35.087118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.557 qpair failed and we were unable to recover it. 00:39:29.557 [2024-07-22 10:55:35.097020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.557 [2024-07-22 10:55:35.097071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.557 [2024-07-22 10:55:35.097082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.557 [2024-07-22 10:55:35.097087] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.557 [2024-07-22 10:55:35.097091] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.557 [2024-07-22 10:55:35.097102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.557 qpair failed and we were unable to recover it. 00:39:29.557 [2024-07-22 10:55:35.107026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.557 [2024-07-22 10:55:35.107090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.557 [2024-07-22 10:55:35.107108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.557 [2024-07-22 10:55:35.107114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.557 [2024-07-22 10:55:35.107119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.557 [2024-07-22 10:55:35.107132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.557 qpair failed and we were unable to recover it. 
00:39:29.557 [2024-07-22 10:55:35.117087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.557 [2024-07-22 10:55:35.117163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.557 [2024-07-22 10:55:35.117175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.557 [2024-07-22 10:55:35.117180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.557 [2024-07-22 10:55:35.117184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.557 [2024-07-22 10:55:35.117195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.557 qpair failed and we were unable to recover it. 00:39:29.557 [2024-07-22 10:55:35.127118] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.557 [2024-07-22 10:55:35.127178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.557 [2024-07-22 10:55:35.127202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.557 [2024-07-22 10:55:35.127208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.557 [2024-07-22 10:55:35.127213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.557 [2024-07-22 10:55:35.127227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.557 qpair failed and we were unable to recover it. 00:39:29.557 [2024-07-22 10:55:35.137124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.557 [2024-07-22 10:55:35.137180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.558 [2024-07-22 10:55:35.137198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.558 [2024-07-22 10:55:35.137204] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.558 [2024-07-22 10:55:35.137209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.558 [2024-07-22 10:55:35.137223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.558 qpair failed and we were unable to recover it. 
00:39:29.558 [2024-07-22 10:55:35.147131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.558 [2024-07-22 10:55:35.147190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.558 [2024-07-22 10:55:35.147208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.558 [2024-07-22 10:55:35.147214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.558 [2024-07-22 10:55:35.147219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.558 [2024-07-22 10:55:35.147232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.558 qpair failed and we were unable to recover it. 00:39:29.558 [2024-07-22 10:55:35.157221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.558 [2024-07-22 10:55:35.157276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.558 [2024-07-22 10:55:35.157288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.558 [2024-07-22 10:55:35.157294] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.558 [2024-07-22 10:55:35.157298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.558 [2024-07-22 10:55:35.157309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.558 qpair failed and we were unable to recover it. 00:39:29.558 [2024-07-22 10:55:35.167223] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.558 [2024-07-22 10:55:35.167281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.558 [2024-07-22 10:55:35.167292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.558 [2024-07-22 10:55:35.167297] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.558 [2024-07-22 10:55:35.167302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.558 [2024-07-22 10:55:35.167315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.558 qpair failed and we were unable to recover it. 
00:39:29.558 [2024-07-22 10:55:35.177230] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.558 [2024-07-22 10:55:35.177281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.558 [2024-07-22 10:55:35.177292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.558 [2024-07-22 10:55:35.177297] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.558 [2024-07-22 10:55:35.177302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.558 [2024-07-22 10:55:35.177312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.558 qpair failed and we were unable to recover it. 00:39:29.558 [2024-07-22 10:55:35.187265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.558 [2024-07-22 10:55:35.187315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.558 [2024-07-22 10:55:35.187326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.558 [2024-07-22 10:55:35.187331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.558 [2024-07-22 10:55:35.187335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.558 [2024-07-22 10:55:35.187345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.558 qpair failed and we were unable to recover it. 00:39:29.558 [2024-07-22 10:55:35.197314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.558 [2024-07-22 10:55:35.197366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.558 [2024-07-22 10:55:35.197377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.558 [2024-07-22 10:55:35.197382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.558 [2024-07-22 10:55:35.197387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.558 [2024-07-22 10:55:35.197400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.558 qpair failed and we were unable to recover it. 
00:39:29.558 [2024-07-22 10:55:35.207208] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.558 [2024-07-22 10:55:35.207265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.558 [2024-07-22 10:55:35.207276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.558 [2024-07-22 10:55:35.207281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.558 [2024-07-22 10:55:35.207285] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.558 [2024-07-22 10:55:35.207295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.558 qpair failed and we were unable to recover it. 00:39:29.558 [2024-07-22 10:55:35.217362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.558 [2024-07-22 10:55:35.217420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.558 [2024-07-22 10:55:35.217431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.558 [2024-07-22 10:55:35.217436] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.558 [2024-07-22 10:55:35.217441] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.558 [2024-07-22 10:55:35.217451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.558 qpair failed and we were unable to recover it. 00:39:29.558 [2024-07-22 10:55:35.227254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.558 [2024-07-22 10:55:35.227302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.558 [2024-07-22 10:55:35.227313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.558 [2024-07-22 10:55:35.227318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.558 [2024-07-22 10:55:35.227322] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.558 [2024-07-22 10:55:35.227333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.558 qpair failed and we were unable to recover it. 
00:39:29.558 [2024-07-22 10:55:35.237413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.558 [2024-07-22 10:55:35.237468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.558 [2024-07-22 10:55:35.237478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.558 [2024-07-22 10:55:35.237483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.558 [2024-07-22 10:55:35.237488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.558 [2024-07-22 10:55:35.237498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.558 qpair failed and we were unable to recover it. 00:39:29.558 [2024-07-22 10:55:35.247507] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.558 [2024-07-22 10:55:35.247613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.558 [2024-07-22 10:55:35.247624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.558 [2024-07-22 10:55:35.247628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.558 [2024-07-22 10:55:35.247633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.558 [2024-07-22 10:55:35.247643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.558 qpair failed and we were unable to recover it. 00:39:29.820 [2024-07-22 10:55:35.257474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.820 [2024-07-22 10:55:35.257531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.820 [2024-07-22 10:55:35.257542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.820 [2024-07-22 10:55:35.257547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.820 [2024-07-22 10:55:35.257555] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.820 [2024-07-22 10:55:35.257565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.820 qpair failed and we were unable to recover it. 
00:39:29.820 [2024-07-22 10:55:35.267489] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.820 [2024-07-22 10:55:35.267540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.820 [2024-07-22 10:55:35.267551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.820 [2024-07-22 10:55:35.267556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.820 [2024-07-22 10:55:35.267561] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.820 [2024-07-22 10:55:35.267571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.820 qpair failed and we were unable to recover it. 00:39:29.820 [2024-07-22 10:55:35.277522] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.820 [2024-07-22 10:55:35.277574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.820 [2024-07-22 10:55:35.277585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.820 [2024-07-22 10:55:35.277590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.820 [2024-07-22 10:55:35.277594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.820 [2024-07-22 10:55:35.277605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.820 qpair failed and we were unable to recover it. 00:39:29.820 [2024-07-22 10:55:35.287569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.821 [2024-07-22 10:55:35.287624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.821 [2024-07-22 10:55:35.287635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.821 [2024-07-22 10:55:35.287640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.821 [2024-07-22 10:55:35.287644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.821 [2024-07-22 10:55:35.287655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.821 qpair failed and we were unable to recover it. 
00:39:29.821 [2024-07-22 10:55:35.297574] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.821 [2024-07-22 10:55:35.297630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.821 [2024-07-22 10:55:35.297641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.821 [2024-07-22 10:55:35.297645] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.821 [2024-07-22 10:55:35.297650] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.821 [2024-07-22 10:55:35.297660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.821 qpair failed and we were unable to recover it. 00:39:29.821 [2024-07-22 10:55:35.307610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.821 [2024-07-22 10:55:35.307666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.821 [2024-07-22 10:55:35.307680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.821 [2024-07-22 10:55:35.307685] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.821 [2024-07-22 10:55:35.307690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.821 [2024-07-22 10:55:35.307701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.821 qpair failed and we were unable to recover it. 00:39:29.821 [2024-07-22 10:55:35.317634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.821 [2024-07-22 10:55:35.317686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.821 [2024-07-22 10:55:35.317697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.821 [2024-07-22 10:55:35.317702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.821 [2024-07-22 10:55:35.317707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.821 [2024-07-22 10:55:35.317717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.821 qpair failed and we were unable to recover it. 
00:39:29.821 [2024-07-22 10:55:35.327649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.821 [2024-07-22 10:55:35.327703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.821 [2024-07-22 10:55:35.327713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.821 [2024-07-22 10:55:35.327719] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.821 [2024-07-22 10:55:35.327723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.821 [2024-07-22 10:55:35.327734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.821 qpair failed and we were unable to recover it. 00:39:29.821 [2024-07-22 10:55:35.337697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.821 [2024-07-22 10:55:35.337754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.821 [2024-07-22 10:55:35.337765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.821 [2024-07-22 10:55:35.337770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.821 [2024-07-22 10:55:35.337774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.821 [2024-07-22 10:55:35.337785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.821 qpair failed and we were unable to recover it. 00:39:29.821 [2024-07-22 10:55:35.347719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.821 [2024-07-22 10:55:35.347768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.821 [2024-07-22 10:55:35.347778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.821 [2024-07-22 10:55:35.347786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.821 [2024-07-22 10:55:35.347791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.821 [2024-07-22 10:55:35.347801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.821 qpair failed and we were unable to recover it. 
00:39:29.821 [2024-07-22 10:55:35.357731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.821 [2024-07-22 10:55:35.357781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.821 [2024-07-22 10:55:35.357791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.821 [2024-07-22 10:55:35.357796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.821 [2024-07-22 10:55:35.357801] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.821 [2024-07-22 10:55:35.357811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.821 qpair failed and we were unable to recover it. 00:39:29.821 [2024-07-22 10:55:35.367721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.821 [2024-07-22 10:55:35.367778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.821 [2024-07-22 10:55:35.367788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.821 [2024-07-22 10:55:35.367793] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.821 [2024-07-22 10:55:35.367797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.821 [2024-07-22 10:55:35.367807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.821 qpair failed and we were unable to recover it. 00:39:29.821 [2024-07-22 10:55:35.377766] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.821 [2024-07-22 10:55:35.377817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.821 [2024-07-22 10:55:35.377828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.821 [2024-07-22 10:55:35.377833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.821 [2024-07-22 10:55:35.377837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.821 [2024-07-22 10:55:35.377848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.821 qpair failed and we were unable to recover it. 
00:39:29.821 [2024-07-22 10:55:35.387807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.821 [2024-07-22 10:55:35.387861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.821 [2024-07-22 10:55:35.387872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.821 [2024-07-22 10:55:35.387878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.821 [2024-07-22 10:55:35.387882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.821 [2024-07-22 10:55:35.387892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.821 qpair failed and we were unable to recover it. 00:39:29.821 [2024-07-22 10:55:35.397858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.821 [2024-07-22 10:55:35.397911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.821 [2024-07-22 10:55:35.397922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.821 [2024-07-22 10:55:35.397927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.821 [2024-07-22 10:55:35.397931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.821 [2024-07-22 10:55:35.397941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.821 qpair failed and we were unable to recover it. 00:39:29.821 [2024-07-22 10:55:35.407856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.821 [2024-07-22 10:55:35.407922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.821 [2024-07-22 10:55:35.407933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.821 [2024-07-22 10:55:35.407938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.821 [2024-07-22 10:55:35.407942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:29.821 [2024-07-22 10:55:35.407953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.821 qpair failed and we were unable to recover it. 
[... the same CONNECT-failure sequence for tqpair=0x7f1658000b90 (qpair id 2) repeats roughly every 10 ms from 10:55:35.417 through 10:55:36.039, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:39:30.609 [2024-07-22 10:55:36.049665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.609 [2024-07-22 10:55:36.049720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.609 [2024-07-22 10:55:36.049731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.609 [2024-07-22 10:55:36.049736] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.609 [2024-07-22 10:55:36.049741] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:30.609 [2024-07-22 10:55:36.049751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.609 qpair failed and we were unable to recover it. 00:39:30.609 [2024-07-22 10:55:36.059708] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.609 [2024-07-22 10:55:36.059760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.609 [2024-07-22 10:55:36.059770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.609 [2024-07-22 10:55:36.059776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.609 [2024-07-22 10:55:36.059783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:30.609 [2024-07-22 10:55:36.059794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.609 qpair failed and we were unable to recover it. 00:39:30.609 [2024-07-22 10:55:36.069713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.609 [2024-07-22 10:55:36.069762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.609 [2024-07-22 10:55:36.069773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.609 [2024-07-22 10:55:36.069778] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.609 [2024-07-22 10:55:36.069782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:30.609 [2024-07-22 10:55:36.069793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.609 qpair failed and we were unable to recover it. 
00:39:30.609 [2024-07-22 10:55:36.079739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.609 [2024-07-22 10:55:36.079791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.609 [2024-07-22 10:55:36.079802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.609 [2024-07-22 10:55:36.079807] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.609 [2024-07-22 10:55:36.079812] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:30.609 [2024-07-22 10:55:36.079822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.609 qpair failed and we were unable to recover it. 00:39:30.609 [2024-07-22 10:55:36.089806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.609 [2024-07-22 10:55:36.089878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.609 [2024-07-22 10:55:36.089889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.609 [2024-07-22 10:55:36.089894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.609 [2024-07-22 10:55:36.089898] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:30.609 [2024-07-22 10:55:36.089908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.609 qpair failed and we were unable to recover it. 00:39:30.609 [2024-07-22 10:55:36.099781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.609 [2024-07-22 10:55:36.099829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.609 [2024-07-22 10:55:36.099841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.609 [2024-07-22 10:55:36.099846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.609 [2024-07-22 10:55:36.099850] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:30.609 [2024-07-22 10:55:36.099861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.609 qpair failed and we were unable to recover it. 
00:39:30.609 [2024-07-22 10:55:36.109818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.609 [2024-07-22 10:55:36.109911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.609 [2024-07-22 10:55:36.109922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.609 [2024-07-22 10:55:36.109928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.609 [2024-07-22 10:55:36.109933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:30.609 [2024-07-22 10:55:36.109943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.609 qpair failed and we were unable to recover it. 00:39:30.609 [2024-07-22 10:55:36.119872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.609 [2024-07-22 10:55:36.119922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.609 [2024-07-22 10:55:36.119933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.609 [2024-07-22 10:55:36.119938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.609 [2024-07-22 10:55:36.119942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:30.609 [2024-07-22 10:55:36.119953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.609 qpair failed and we were unable to recover it. 00:39:30.609 [2024-07-22 10:55:36.129877] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.609 [2024-07-22 10:55:36.129930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.609 [2024-07-22 10:55:36.129941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.609 [2024-07-22 10:55:36.129946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.609 [2024-07-22 10:55:36.129950] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:30.609 [2024-07-22 10:55:36.129960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.609 qpair failed and we were unable to recover it. 
00:39:30.609 [2024-07-22 10:55:36.139894] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.609 [2024-07-22 10:55:36.139942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.609 [2024-07-22 10:55:36.139953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.609 [2024-07-22 10:55:36.139958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.609 [2024-07-22 10:55:36.139962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:30.609 [2024-07-22 10:55:36.139973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.609 qpair failed and we were unable to recover it. 00:39:30.609 [2024-07-22 10:55:36.149928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.609 [2024-07-22 10:55:36.149973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.609 [2024-07-22 10:55:36.149984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.609 [2024-07-22 10:55:36.149989] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.609 [2024-07-22 10:55:36.149996] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:30.609 [2024-07-22 10:55:36.150007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.609 qpair failed and we were unable to recover it. 00:39:30.609 [2024-07-22 10:55:36.159956] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.609 [2024-07-22 10:55:36.160012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.609 [2024-07-22 10:55:36.160023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.609 [2024-07-22 10:55:36.160028] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.609 [2024-07-22 10:55:36.160033] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:30.609 [2024-07-22 10:55:36.160043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.609 qpair failed and we were unable to recover it. 
00:39:30.609 [2024-07-22 10:55:36.169976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.609 [2024-07-22 10:55:36.170029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.610 [2024-07-22 10:55:36.170041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.610 [2024-07-22 10:55:36.170046] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.610 [2024-07-22 10:55:36.170050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:30.610 [2024-07-22 10:55:36.170060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.610 qpair failed and we were unable to recover it. 00:39:30.610 [2024-07-22 10:55:36.179934] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.610 [2024-07-22 10:55:36.180040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.610 [2024-07-22 10:55:36.180050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.610 [2024-07-22 10:55:36.180056] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.610 [2024-07-22 10:55:36.180060] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:30.610 [2024-07-22 10:55:36.180071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.610 qpair failed and we were unable to recover it. 00:39:30.610 [2024-07-22 10:55:36.190030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.610 [2024-07-22 10:55:36.190078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.610 [2024-07-22 10:55:36.190089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.610 [2024-07-22 10:55:36.190094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.610 [2024-07-22 10:55:36.190099] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:30.610 [2024-07-22 10:55:36.190109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.610 qpair failed and we were unable to recover it. 
00:39:30.610 [2024-07-22 10:55:36.200043] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.610 [2024-07-22 10:55:36.200104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.610 [2024-07-22 10:55:36.200122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.610 [2024-07-22 10:55:36.200128] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.610 [2024-07-22 10:55:36.200133] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:30.610 [2024-07-22 10:55:36.200146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.610 qpair failed and we were unable to recover it. 00:39:30.610 [2024-07-22 10:55:36.210013] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.610 [2024-07-22 10:55:36.210077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.610 [2024-07-22 10:55:36.210089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.610 [2024-07-22 10:55:36.210095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.610 [2024-07-22 10:55:36.210099] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:30.610 [2024-07-22 10:55:36.210110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.610 qpair failed and we were unable to recover it. 00:39:30.610 [2024-07-22 10:55:36.220029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.610 [2024-07-22 10:55:36.220077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.610 [2024-07-22 10:55:36.220089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.610 [2024-07-22 10:55:36.220094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.610 [2024-07-22 10:55:36.220098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:30.610 [2024-07-22 10:55:36.220108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.610 qpair failed and we were unable to recover it. 
00:39:30.610 [2024-07-22 10:55:36.230147] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.610 [2024-07-22 10:55:36.230202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.610 [2024-07-22 10:55:36.230213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.610 [2024-07-22 10:55:36.230218] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.610 [2024-07-22 10:55:36.230223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:30.610 [2024-07-22 10:55:36.230233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.610 qpair failed and we were unable to recover it. 00:39:30.610 [2024-07-22 10:55:36.240143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.610 [2024-07-22 10:55:36.240203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.610 [2024-07-22 10:55:36.240222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.610 [2024-07-22 10:55:36.240231] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.610 [2024-07-22 10:55:36.240236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:30.610 [2024-07-22 10:55:36.240249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.610 qpair failed and we were unable to recover it. 00:39:30.610 [2024-07-22 10:55:36.250197] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.610 [2024-07-22 10:55:36.250256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.610 [2024-07-22 10:55:36.250274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.610 [2024-07-22 10:55:36.250280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.610 [2024-07-22 10:55:36.250285] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:30.610 [2024-07-22 10:55:36.250298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.610 qpair failed and we were unable to recover it. 
00:39:30.610 [2024-07-22 10:55:36.260082] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.610 [2024-07-22 10:55:36.260130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.610 [2024-07-22 10:55:36.260143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.610 [2024-07-22 10:55:36.260148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.610 [2024-07-22 10:55:36.260152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:30.610 [2024-07-22 10:55:36.260163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.610 qpair failed and we were unable to recover it. 00:39:30.610 [2024-07-22 10:55:36.270243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.610 [2024-07-22 10:55:36.270290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.610 [2024-07-22 10:55:36.270302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.610 [2024-07-22 10:55:36.270307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.610 [2024-07-22 10:55:36.270311] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:30.610 [2024-07-22 10:55:36.270321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.610 qpair failed and we were unable to recover it. 00:39:30.610 [2024-07-22 10:55:36.280180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.610 [2024-07-22 10:55:36.280270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.610 [2024-07-22 10:55:36.280281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.610 [2024-07-22 10:55:36.280286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.610 [2024-07-22 10:55:36.280290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:30.610 [2024-07-22 10:55:36.280301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.610 qpair failed and we were unable to recover it. 
00:39:30.610 [2024-07-22 10:55:36.290292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.610 [2024-07-22 10:55:36.290351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.610 [2024-07-22 10:55:36.290363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.610 [2024-07-22 10:55:36.290368] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.610 [2024-07-22 10:55:36.290372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:30.610 [2024-07-22 10:55:36.290382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.610 qpair failed and we were unable to recover it. 00:39:30.610 [2024-07-22 10:55:36.300306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.610 [2024-07-22 10:55:36.300353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.610 [2024-07-22 10:55:36.300364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.610 [2024-07-22 10:55:36.300369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.610 [2024-07-22 10:55:36.300374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:30.611 [2024-07-22 10:55:36.300384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.611 qpair failed and we were unable to recover it. 00:39:30.871 [2024-07-22 10:55:36.310332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.871 [2024-07-22 10:55:36.310381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.871 [2024-07-22 10:55:36.310392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.871 [2024-07-22 10:55:36.310401] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.871 [2024-07-22 10:55:36.310405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:30.871 [2024-07-22 10:55:36.310416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.871 qpair failed and we were unable to recover it. 
00:39:30.871 [2024-07-22 10:55:36.320442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.871 [2024-07-22 10:55:36.320506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.871 [2024-07-22 10:55:36.320517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.871 [2024-07-22 10:55:36.320522] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.871 [2024-07-22 10:55:36.320527] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:30.871 [2024-07-22 10:55:36.320538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.871 qpair failed and we were unable to recover it. 00:39:30.871 [2024-07-22 10:55:36.330414] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.871 [2024-07-22 10:55:36.330487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.871 [2024-07-22 10:55:36.330501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.872 [2024-07-22 10:55:36.330506] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.872 [2024-07-22 10:55:36.330511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:30.872 [2024-07-22 10:55:36.330521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.872 qpair failed and we were unable to recover it. 00:39:30.872 [2024-07-22 10:55:36.340451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.872 [2024-07-22 10:55:36.340502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.872 [2024-07-22 10:55:36.340513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.872 [2024-07-22 10:55:36.340518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.872 [2024-07-22 10:55:36.340522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:30.872 [2024-07-22 10:55:36.340533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.872 qpair failed and we were unable to recover it. 
00:39:30.872 [2024-07-22 10:55:36.350468] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.872 [2024-07-22 10:55:36.350523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.872 [2024-07-22 10:55:36.350534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.872 [2024-07-22 10:55:36.350539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.872 [2024-07-22 10:55:36.350543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:30.872 [2024-07-22 10:55:36.350554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.872 qpair failed and we were unable to recover it. 00:39:30.872 [2024-07-22 10:55:36.360502] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.872 [2024-07-22 10:55:36.360561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.872 [2024-07-22 10:55:36.360572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.872 [2024-07-22 10:55:36.360577] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.872 [2024-07-22 10:55:36.360581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:30.872 [2024-07-22 10:55:36.360592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.872 qpair failed and we were unable to recover it. 00:39:30.872 [2024-07-22 10:55:36.370540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.872 [2024-07-22 10:55:36.370597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.872 [2024-07-22 10:55:36.370608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.872 [2024-07-22 10:55:36.370613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.872 [2024-07-22 10:55:36.370617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:30.872 [2024-07-22 10:55:36.370630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.872 qpair failed and we were unable to recover it. 
00:39:30.872 [2024-07-22 10:55:36.380561] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.872 [2024-07-22 10:55:36.380636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.872 [2024-07-22 10:55:36.380646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.872 [2024-07-22 10:55:36.380651] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.872 [2024-07-22 10:55:36.380656] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:30.872 [2024-07-22 10:55:36.380666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.872 qpair failed and we were unable to recover it. 00:39:30.872 [2024-07-22 10:55:36.390595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.872 [2024-07-22 10:55:36.390644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.872 [2024-07-22 10:55:36.390654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.872 [2024-07-22 10:55:36.390659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.872 [2024-07-22 10:55:36.390664] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:30.872 [2024-07-22 10:55:36.390674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.872 qpair failed and we were unable to recover it. 00:39:30.872 [2024-07-22 10:55:36.400622] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.872 [2024-07-22 10:55:36.400673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.872 [2024-07-22 10:55:36.400684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.872 [2024-07-22 10:55:36.400689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.872 [2024-07-22 10:55:36.400694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:30.872 [2024-07-22 10:55:36.400704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.872 qpair failed and we were unable to recover it. 
00:39:30.872 [2024-07-22 10:55:36.410642] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.872 [2024-07-22 10:55:36.410699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.872 [2024-07-22 10:55:36.410709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.872 [2024-07-22 10:55:36.410714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.872 [2024-07-22 10:55:36.410718] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:30.872 [2024-07-22 10:55:36.410728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.872 qpair failed and we were unable to recover it. 00:39:30.872 [2024-07-22 10:55:36.420645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.872 [2024-07-22 10:55:36.420734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.872 [2024-07-22 10:55:36.420748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.872 [2024-07-22 10:55:36.420753] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.872 [2024-07-22 10:55:36.420757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:30.872 [2024-07-22 10:55:36.420767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.872 qpair failed and we were unable to recover it. 00:39:30.872 [2024-07-22 10:55:36.430666] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.872 [2024-07-22 10:55:36.430711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.872 [2024-07-22 10:55:36.430722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.872 [2024-07-22 10:55:36.430727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.872 [2024-07-22 10:55:36.430731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:30.872 [2024-07-22 10:55:36.430741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.872 qpair failed and we were unable to recover it. 
00:39:30.872 [2024-07-22 10:55:36.440757] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.872 [2024-07-22 10:55:36.440837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.872 [2024-07-22 10:55:36.440848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.872 [2024-07-22 10:55:36.440853] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.872 [2024-07-22 10:55:36.440857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:30.872 [2024-07-22 10:55:36.440868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.872 qpair failed and we were unable to recover it. 00:39:30.872 [2024-07-22 10:55:36.450771] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.872 [2024-07-22 10:55:36.450848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.872 [2024-07-22 10:55:36.450858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.872 [2024-07-22 10:55:36.450863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.872 [2024-07-22 10:55:36.450868] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:30.872 [2024-07-22 10:55:36.450878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.872 qpair failed and we were unable to recover it. 00:39:30.872 [2024-07-22 10:55:36.460781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.872 [2024-07-22 10:55:36.460839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.872 [2024-07-22 10:55:36.460849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.872 [2024-07-22 10:55:36.460854] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.872 [2024-07-22 10:55:36.460861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:30.872 [2024-07-22 10:55:36.460872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.872 qpair failed and we were unable to recover it. 
00:39:30.872 [2024-07-22 10:55:36.470813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.872 [2024-07-22 10:55:36.470886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.872 [2024-07-22 10:55:36.470896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.872 [2024-07-22 10:55:36.470901] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.872 [2024-07-22 10:55:36.470905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:30.872 [2024-07-22 10:55:36.470915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.872 qpair failed and we were unable to recover it. 00:39:30.872 [2024-07-22 10:55:36.480836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.872 [2024-07-22 10:55:36.480886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.872 [2024-07-22 10:55:36.480896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.872 [2024-07-22 10:55:36.480901] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.872 [2024-07-22 10:55:36.480906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:30.872 [2024-07-22 10:55:36.480916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.872 qpair failed and we were unable to recover it. 00:39:30.872 [2024-07-22 10:55:36.490754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.872 [2024-07-22 10:55:36.490854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.872 [2024-07-22 10:55:36.490865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.872 [2024-07-22 10:55:36.490870] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.872 [2024-07-22 10:55:36.490875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:30.872 [2024-07-22 10:55:36.490885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.872 qpair failed and we were unable to recover it. 
00:39:30.872 [2024-07-22 10:55:36.500875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.872 [2024-07-22 10:55:36.500926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.872 [2024-07-22 10:55:36.500937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.872 [2024-07-22 10:55:36.500942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.872 [2024-07-22 10:55:36.500946] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:30.872 [2024-07-22 10:55:36.500956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.872 qpair failed and we were unable to recover it. 00:39:30.872 [2024-07-22 10:55:36.510790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.872 [2024-07-22 10:55:36.510847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.872 [2024-07-22 10:55:36.510858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.872 [2024-07-22 10:55:36.510863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.872 [2024-07-22 10:55:36.510867] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:30.872 [2024-07-22 10:55:36.510877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.872 qpair failed and we were unable to recover it. 00:39:30.872 [2024-07-22 10:55:36.520945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.872 [2024-07-22 10:55:36.520995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.872 [2024-07-22 10:55:36.521006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.872 [2024-07-22 10:55:36.521011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.872 [2024-07-22 10:55:36.521015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:30.872 [2024-07-22 10:55:36.521025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.872 qpair failed and we were unable to recover it. 
00:39:30.872 [2024-07-22 10:55:36.530960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.872 [2024-07-22 10:55:36.531017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.872 [2024-07-22 10:55:36.531027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.872 [2024-07-22 10:55:36.531032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.873 [2024-07-22 10:55:36.531036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:30.873 [2024-07-22 10:55:36.531046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.873 qpair failed and we were unable to recover it. 00:39:30.873 [2024-07-22 10:55:36.541008] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.873 [2024-07-22 10:55:36.541058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.873 [2024-07-22 10:55:36.541068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.873 [2024-07-22 10:55:36.541073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.873 [2024-07-22 10:55:36.541078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:30.873 [2024-07-22 10:55:36.541088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.873 qpair failed and we were unable to recover it. 00:39:30.873 [2024-07-22 10:55:36.551015] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.873 [2024-07-22 10:55:36.551082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.873 [2024-07-22 10:55:36.551093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.873 [2024-07-22 10:55:36.551098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.873 [2024-07-22 10:55:36.551105] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:30.873 [2024-07-22 10:55:36.551115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.873 qpair failed and we were unable to recover it. 
00:39:30.873 [2024-07-22 10:55:36.561046] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.873 [2024-07-22 10:55:36.561096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.873 [2024-07-22 10:55:36.561106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.873 [2024-07-22 10:55:36.561111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.873 [2024-07-22 10:55:36.561115] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:30.873 [2024-07-22 10:55:36.561126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.873 qpair failed and we were unable to recover it. 00:39:31.133 [2024-07-22 10:55:36.571089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.133 [2024-07-22 10:55:36.571144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.133 [2024-07-22 10:55:36.571155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.133 [2024-07-22 10:55:36.571160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.133 [2024-07-22 10:55:36.571164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.133 [2024-07-22 10:55:36.571175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.133 qpair failed and we were unable to recover it. 00:39:31.133 [2024-07-22 10:55:36.581096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.133 [2024-07-22 10:55:36.581148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.133 [2024-07-22 10:55:36.581159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.133 [2024-07-22 10:55:36.581164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.133 [2024-07-22 10:55:36.581168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.133 [2024-07-22 10:55:36.581179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.133 qpair failed and we were unable to recover it. 
00:39:31.133 [2024-07-22 10:55:36.591137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.133 [2024-07-22 10:55:36.591185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.133 [2024-07-22 10:55:36.591196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.133 [2024-07-22 10:55:36.591201] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.133 [2024-07-22 10:55:36.591206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.133 [2024-07-22 10:55:36.591216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.133 qpair failed and we were unable to recover it. 00:39:31.133 [2024-07-22 10:55:36.601164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.133 [2024-07-22 10:55:36.601214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.133 [2024-07-22 10:55:36.601224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.133 [2024-07-22 10:55:36.601230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.133 [2024-07-22 10:55:36.601234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.133 [2024-07-22 10:55:36.601245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.133 qpair failed and we were unable to recover it. 00:39:31.133 [2024-07-22 10:55:36.611219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.133 [2024-07-22 10:55:36.611273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.133 [2024-07-22 10:55:36.611283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.133 [2024-07-22 10:55:36.611288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.133 [2024-07-22 10:55:36.611293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.133 [2024-07-22 10:55:36.611303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.133 qpair failed and we were unable to recover it. 
00:39:31.134 [2024-07-22 10:55:36.621255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.134 [2024-07-22 10:55:36.621305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.134 [2024-07-22 10:55:36.621316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.134 [2024-07-22 10:55:36.621321] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.134 [2024-07-22 10:55:36.621325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.134 [2024-07-22 10:55:36.621336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.134 qpair failed and we were unable to recover it. 00:39:31.134 [2024-07-22 10:55:36.631231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.134 [2024-07-22 10:55:36.631277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.134 [2024-07-22 10:55:36.631288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.134 [2024-07-22 10:55:36.631293] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.134 [2024-07-22 10:55:36.631297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.134 [2024-07-22 10:55:36.631307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.134 qpair failed and we were unable to recover it. 00:39:31.134 [2024-07-22 10:55:36.641262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.134 [2024-07-22 10:55:36.641312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.134 [2024-07-22 10:55:36.641323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.134 [2024-07-22 10:55:36.641331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.134 [2024-07-22 10:55:36.641336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.134 [2024-07-22 10:55:36.641346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.134 qpair failed and we were unable to recover it. 
00:39:31.134 [2024-07-22 10:55:36.651297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.134 [2024-07-22 10:55:36.651356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.134 [2024-07-22 10:55:36.651367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.134 [2024-07-22 10:55:36.651372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.134 [2024-07-22 10:55:36.651376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.134 [2024-07-22 10:55:36.651387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.134 qpair failed and we were unable to recover it. 00:39:31.134 [2024-07-22 10:55:36.661297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.134 [2024-07-22 10:55:36.661347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.134 [2024-07-22 10:55:36.661357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.134 [2024-07-22 10:55:36.661362] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.134 [2024-07-22 10:55:36.661367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.134 [2024-07-22 10:55:36.661377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.134 qpair failed and we were unable to recover it. 00:39:31.134 [2024-07-22 10:55:36.671334] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.134 [2024-07-22 10:55:36.671415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.134 [2024-07-22 10:55:36.671425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.134 [2024-07-22 10:55:36.671430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.134 [2024-07-22 10:55:36.671435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.134 [2024-07-22 10:55:36.671446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.134 qpair failed and we were unable to recover it. 
00:39:31.134 [2024-07-22 10:55:36.681254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.134 [2024-07-22 10:55:36.681306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.134 [2024-07-22 10:55:36.681316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.134 [2024-07-22 10:55:36.681321] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.134 [2024-07-22 10:55:36.681326] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.134 [2024-07-22 10:55:36.681336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.134 qpair failed and we were unable to recover it. 00:39:31.134 [2024-07-22 10:55:36.691410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.134 [2024-07-22 10:55:36.691466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.134 [2024-07-22 10:55:36.691477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.134 [2024-07-22 10:55:36.691481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.134 [2024-07-22 10:55:36.691486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.134 [2024-07-22 10:55:36.691496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.134 qpair failed and we were unable to recover it. 00:39:31.134 [2024-07-22 10:55:36.701303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.134 [2024-07-22 10:55:36.701352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.134 [2024-07-22 10:55:36.701363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.134 [2024-07-22 10:55:36.701367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.134 [2024-07-22 10:55:36.701372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.134 [2024-07-22 10:55:36.701382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.134 qpair failed and we were unable to recover it. 
00:39:31.134 [2024-07-22 10:55:36.711472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.134 [2024-07-22 10:55:36.711521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.134 [2024-07-22 10:55:36.711532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.134 [2024-07-22 10:55:36.711537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.134 [2024-07-22 10:55:36.711541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.134 [2024-07-22 10:55:36.711552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.134 qpair failed and we were unable to recover it. 00:39:31.134 [2024-07-22 10:55:36.721488] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.134 [2024-07-22 10:55:36.721538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.134 [2024-07-22 10:55:36.721549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.134 [2024-07-22 10:55:36.721554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.134 [2024-07-22 10:55:36.721558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.134 [2024-07-22 10:55:36.721568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.134 qpair failed and we were unable to recover it. 00:39:31.134 [2024-07-22 10:55:36.731508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.134 [2024-07-22 10:55:36.731563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.134 [2024-07-22 10:55:36.731576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.134 [2024-07-22 10:55:36.731581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.134 [2024-07-22 10:55:36.731586] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.134 [2024-07-22 10:55:36.731596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.134 qpair failed and we were unable to recover it. 
00:39:31.134 [2024-07-22 10:55:36.741534] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.134 [2024-07-22 10:55:36.741579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.134 [2024-07-22 10:55:36.741590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.134 [2024-07-22 10:55:36.741595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.134 [2024-07-22 10:55:36.741599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.134 [2024-07-22 10:55:36.741609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.134 qpair failed and we were unable to recover it. 00:39:31.134 [2024-07-22 10:55:36.751578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.134 [2024-07-22 10:55:36.751624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.134 [2024-07-22 10:55:36.751635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.134 [2024-07-22 10:55:36.751640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.134 [2024-07-22 10:55:36.751644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.134 [2024-07-22 10:55:36.751655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.134 qpair failed and we were unable to recover it. 00:39:31.134 [2024-07-22 10:55:36.761595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.134 [2024-07-22 10:55:36.761644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.134 [2024-07-22 10:55:36.761655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.134 [2024-07-22 10:55:36.761660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.134 [2024-07-22 10:55:36.761664] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.134 [2024-07-22 10:55:36.761674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.134 qpair failed and we were unable to recover it. 
00:39:31.134 [2024-07-22 10:55:36.771605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.134 [2024-07-22 10:55:36.771664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.134 [2024-07-22 10:55:36.771675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.134 [2024-07-22 10:55:36.771680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.134 [2024-07-22 10:55:36.771684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.134 [2024-07-22 10:55:36.771697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.134 qpair failed and we were unable to recover it. 00:39:31.134 [2024-07-22 10:55:36.781649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.134 [2024-07-22 10:55:36.781700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.134 [2024-07-22 10:55:36.781711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.134 [2024-07-22 10:55:36.781716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.134 [2024-07-22 10:55:36.781721] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.134 [2024-07-22 10:55:36.781731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.134 qpair failed and we were unable to recover it. 00:39:31.134 [2024-07-22 10:55:36.791656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.134 [2024-07-22 10:55:36.791706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.134 [2024-07-22 10:55:36.791717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.134 [2024-07-22 10:55:36.791722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.134 [2024-07-22 10:55:36.791726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.134 [2024-07-22 10:55:36.791737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.134 qpair failed and we were unable to recover it. 
00:39:31.134 [2024-07-22 10:55:36.801718] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.134 [2024-07-22 10:55:36.801807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.134 [2024-07-22 10:55:36.801818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.134 [2024-07-22 10:55:36.801823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.134 [2024-07-22 10:55:36.801828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.134 [2024-07-22 10:55:36.801838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.134 qpair failed and we were unable to recover it. 00:39:31.134 [2024-07-22 10:55:36.811738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.134 [2024-07-22 10:55:36.811790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.134 [2024-07-22 10:55:36.811801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.135 [2024-07-22 10:55:36.811806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.135 [2024-07-22 10:55:36.811811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.135 [2024-07-22 10:55:36.811821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.135 qpair failed and we were unable to recover it. 00:39:31.135 [2024-07-22 10:55:36.821811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.135 [2024-07-22 10:55:36.821886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.135 [2024-07-22 10:55:36.821899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.135 [2024-07-22 10:55:36.821904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.135 [2024-07-22 10:55:36.821908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.135 [2024-07-22 10:55:36.821919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.135 qpair failed and we were unable to recover it. 
00:39:31.396 [2024-07-22 10:55:36.831806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.396 [2024-07-22 10:55:36.831852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.396 [2024-07-22 10:55:36.831863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.396 [2024-07-22 10:55:36.831868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.396 [2024-07-22 10:55:36.831872] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.396 [2024-07-22 10:55:36.831883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.396 qpair failed and we were unable to recover it. 00:39:31.396 [2024-07-22 10:55:36.841782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.396 [2024-07-22 10:55:36.841861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.396 [2024-07-22 10:55:36.841872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.396 [2024-07-22 10:55:36.841877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.396 [2024-07-22 10:55:36.841881] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.396 [2024-07-22 10:55:36.841891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.396 qpair failed and we were unable to recover it. 00:39:31.396 [2024-07-22 10:55:36.851862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.396 [2024-07-22 10:55:36.851913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.396 [2024-07-22 10:55:36.851924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.396 [2024-07-22 10:55:36.851929] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.396 [2024-07-22 10:55:36.851933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.396 [2024-07-22 10:55:36.851943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.396 qpair failed and we were unable to recover it. 
00:39:31.396 [2024-07-22 10:55:36.861888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.396 [2024-07-22 10:55:36.861937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.396 [2024-07-22 10:55:36.861947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.396 [2024-07-22 10:55:36.861952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.396 [2024-07-22 10:55:36.861957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.396 [2024-07-22 10:55:36.861969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.396 qpair failed and we were unable to recover it. 00:39:31.396 [2024-07-22 10:55:36.871917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.396 [2024-07-22 10:55:36.871974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.396 [2024-07-22 10:55:36.871985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.396 [2024-07-22 10:55:36.871990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.396 [2024-07-22 10:55:36.871994] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.396 [2024-07-22 10:55:36.872005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.396 qpair failed and we were unable to recover it. 00:39:31.396 [2024-07-22 10:55:36.881944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.396 [2024-07-22 10:55:36.881994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.396 [2024-07-22 10:55:36.882005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.396 [2024-07-22 10:55:36.882009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.396 [2024-07-22 10:55:36.882014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.396 [2024-07-22 10:55:36.882024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.396 qpair failed and we were unable to recover it. 
00:39:31.396 [2024-07-22 10:55:36.891962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.397 [2024-07-22 10:55:36.892041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.397 [2024-07-22 10:55:36.892052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.397 [2024-07-22 10:55:36.892057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.397 [2024-07-22 10:55:36.892061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.397 [2024-07-22 10:55:36.892072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.397 qpair failed and we were unable to recover it. 00:39:31.397 [2024-07-22 10:55:36.902078] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.397 [2024-07-22 10:55:36.902142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.397 [2024-07-22 10:55:36.902152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.397 [2024-07-22 10:55:36.902157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.397 [2024-07-22 10:55:36.902162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.397 [2024-07-22 10:55:36.902172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.397 qpair failed and we were unable to recover it. 00:39:31.397 [2024-07-22 10:55:36.911936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.397 [2024-07-22 10:55:36.911993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.397 [2024-07-22 10:55:36.912004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.397 [2024-07-22 10:55:36.912009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.397 [2024-07-22 10:55:36.912013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.397 [2024-07-22 10:55:36.912024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.397 qpair failed and we were unable to recover it. 
00:39:31.397 [2024-07-22 10:55:36.921978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.397 [2024-07-22 10:55:36.922040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.397 [2024-07-22 10:55:36.922051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.397 [2024-07-22 10:55:36.922056] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.397 [2024-07-22 10:55:36.922060] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.397 [2024-07-22 10:55:36.922070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.397 qpair failed and we were unable to recover it. 00:39:31.397 [2024-07-22 10:55:36.932026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.397 [2024-07-22 10:55:36.932127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.397 [2024-07-22 10:55:36.932138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.397 [2024-07-22 10:55:36.932143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.397 [2024-07-22 10:55:36.932148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.397 [2024-07-22 10:55:36.932159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.397 qpair failed and we were unable to recover it. 00:39:31.397 [2024-07-22 10:55:36.942115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.397 [2024-07-22 10:55:36.942163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.397 [2024-07-22 10:55:36.942174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.397 [2024-07-22 10:55:36.942179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.397 [2024-07-22 10:55:36.942183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.397 [2024-07-22 10:55:36.942193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.397 qpair failed and we were unable to recover it. 
00:39:31.397 [2024-07-22 10:55:36.952004] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.397 [2024-07-22 10:55:36.952050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.397 [2024-07-22 10:55:36.952061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.397 [2024-07-22 10:55:36.952066] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.397 [2024-07-22 10:55:36.952073] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.397 [2024-07-22 10:55:36.952084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.397 qpair failed and we were unable to recover it. 00:39:31.397 [2024-07-22 10:55:36.962227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.397 [2024-07-22 10:55:36.962312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.397 [2024-07-22 10:55:36.962324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.397 [2024-07-22 10:55:36.962328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.397 [2024-07-22 10:55:36.962333] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.397 [2024-07-22 10:55:36.962344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.397 qpair failed and we were unable to recover it. 00:39:31.397 [2024-07-22 10:55:36.972193] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.397 [2024-07-22 10:55:36.972251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.397 [2024-07-22 10:55:36.972261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.397 [2024-07-22 10:55:36.972266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.397 [2024-07-22 10:55:36.972271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.397 [2024-07-22 10:55:36.972281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.397 qpair failed and we were unable to recover it. 
00:39:31.397 [2024-07-22 10:55:36.982212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.397 [2024-07-22 10:55:36.982269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.397 [2024-07-22 10:55:36.982280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.397 [2024-07-22 10:55:36.982285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.397 [2024-07-22 10:55:36.982289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.397 [2024-07-22 10:55:36.982300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.397 qpair failed and we were unable to recover it. 00:39:31.397 [2024-07-22 10:55:36.992109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.397 [2024-07-22 10:55:36.992164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.397 [2024-07-22 10:55:36.992174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.397 [2024-07-22 10:55:36.992179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.397 [2024-07-22 10:55:36.992183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.397 [2024-07-22 10:55:36.992194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.397 qpair failed and we were unable to recover it. 00:39:31.397 [2024-07-22 10:55:37.002272] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.397 [2024-07-22 10:55:37.002330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.397 [2024-07-22 10:55:37.002341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.397 [2024-07-22 10:55:37.002346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.397 [2024-07-22 10:55:37.002351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.397 [2024-07-22 10:55:37.002361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.397 qpair failed and we were unable to recover it. 
00:39:31.397 [2024-07-22 10:55:37.012300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.397 [2024-07-22 10:55:37.012384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.397 [2024-07-22 10:55:37.012397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.397 [2024-07-22 10:55:37.012403] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.397 [2024-07-22 10:55:37.012407] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.397 [2024-07-22 10:55:37.012417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.397 qpair failed and we were unable to recover it. 00:39:31.397 [2024-07-22 10:55:37.022363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.397 [2024-07-22 10:55:37.022432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.397 [2024-07-22 10:55:37.022443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.397 [2024-07-22 10:55:37.022448] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.397 [2024-07-22 10:55:37.022452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.397 [2024-07-22 10:55:37.022463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.397 qpair failed and we were unable to recover it. 00:39:31.397 [2024-07-22 10:55:37.032342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.397 [2024-07-22 10:55:37.032398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.397 [2024-07-22 10:55:37.032409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.397 [2024-07-22 10:55:37.032414] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.398 [2024-07-22 10:55:37.032418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.398 [2024-07-22 10:55:37.032429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.398 qpair failed and we were unable to recover it. 
00:39:31.398 [2024-07-22 10:55:37.042408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.398 [2024-07-22 10:55:37.042474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.398 [2024-07-22 10:55:37.042485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.398 [2024-07-22 10:55:37.042492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.398 [2024-07-22 10:55:37.042497] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.398 [2024-07-22 10:55:37.042507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.398 qpair failed and we were unable to recover it. 00:39:31.398 [2024-07-22 10:55:37.052425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.398 [2024-07-22 10:55:37.052481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.398 [2024-07-22 10:55:37.052491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.398 [2024-07-22 10:55:37.052496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.398 [2024-07-22 10:55:37.052501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.398 [2024-07-22 10:55:37.052512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.398 qpair failed and we were unable to recover it. 00:39:31.398 [2024-07-22 10:55:37.062433] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.398 [2024-07-22 10:55:37.062480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.398 [2024-07-22 10:55:37.062491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.398 [2024-07-22 10:55:37.062496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.398 [2024-07-22 10:55:37.062501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.398 [2024-07-22 10:55:37.062511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.398 qpair failed and we were unable to recover it. 
00:39:31.398 [2024-07-22 10:55:37.072469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.398 [2024-07-22 10:55:37.072518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.398 [2024-07-22 10:55:37.072528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.398 [2024-07-22 10:55:37.072533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.398 [2024-07-22 10:55:37.072537] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.398 [2024-07-22 10:55:37.072548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.398 qpair failed and we were unable to recover it. 00:39:31.398 [2024-07-22 10:55:37.082546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.398 [2024-07-22 10:55:37.082615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.398 [2024-07-22 10:55:37.082627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.398 [2024-07-22 10:55:37.082633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.398 [2024-07-22 10:55:37.082638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.398 [2024-07-22 10:55:37.082649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.398 qpair failed and we were unable to recover it. 00:39:31.398 [2024-07-22 10:55:37.092482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.398 [2024-07-22 10:55:37.092533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.398 [2024-07-22 10:55:37.092544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.398 [2024-07-22 10:55:37.092549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.398 [2024-07-22 10:55:37.092553] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.398 [2024-07-22 10:55:37.092564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.398 qpair failed and we were unable to recover it. 
00:39:31.659 [2024-07-22 10:55:37.102556] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.659 [2024-07-22 10:55:37.102625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.659 [2024-07-22 10:55:37.102635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.659 [2024-07-22 10:55:37.102640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.659 [2024-07-22 10:55:37.102645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.659 [2024-07-22 10:55:37.102656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.659 qpair failed and we were unable to recover it. 00:39:31.659 [2024-07-22 10:55:37.112572] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.659 [2024-07-22 10:55:37.112621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.659 [2024-07-22 10:55:37.112632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.659 [2024-07-22 10:55:37.112638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.659 [2024-07-22 10:55:37.112643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.659 [2024-07-22 10:55:37.112653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.659 qpair failed and we were unable to recover it. 00:39:31.659 [2024-07-22 10:55:37.122590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.659 [2024-07-22 10:55:37.122674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.659 [2024-07-22 10:55:37.122685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.659 [2024-07-22 10:55:37.122690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.659 [2024-07-22 10:55:37.122695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.659 [2024-07-22 10:55:37.122707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.659 qpair failed and we were unable to recover it. 
00:39:31.659 [2024-07-22 10:55:37.132588] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.659 [2024-07-22 10:55:37.132636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.659 [2024-07-22 10:55:37.132646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.659 [2024-07-22 10:55:37.132654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.659 [2024-07-22 10:55:37.132659] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.659 [2024-07-22 10:55:37.132669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.659 qpair failed and we were unable to recover it. 00:39:31.659 [2024-07-22 10:55:37.142682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.659 [2024-07-22 10:55:37.142732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.659 [2024-07-22 10:55:37.142743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.659 [2024-07-22 10:55:37.142748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.659 [2024-07-22 10:55:37.142752] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.659 [2024-07-22 10:55:37.142763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.659 qpair failed and we were unable to recover it. 00:39:31.659 [2024-07-22 10:55:37.152687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.659 [2024-07-22 10:55:37.152741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.659 [2024-07-22 10:55:37.152752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.659 [2024-07-22 10:55:37.152757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.659 [2024-07-22 10:55:37.152761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.659 [2024-07-22 10:55:37.152771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.659 qpair failed and we were unable to recover it. 
00:39:31.659 [2024-07-22 10:55:37.162771] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.659 [2024-07-22 10:55:37.162829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.659 [2024-07-22 10:55:37.162839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.659 [2024-07-22 10:55:37.162844] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.659 [2024-07-22 10:55:37.162848] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.659 [2024-07-22 10:55:37.162858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.659 qpair failed and we were unable to recover it. 00:39:31.659 [2024-07-22 10:55:37.172717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.659 [2024-07-22 10:55:37.172766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.659 [2024-07-22 10:55:37.172776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.659 [2024-07-22 10:55:37.172782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.659 [2024-07-22 10:55:37.172786] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.659 [2024-07-22 10:55:37.172797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.659 qpair failed and we were unable to recover it. 00:39:31.659 [2024-07-22 10:55:37.182770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.659 [2024-07-22 10:55:37.182818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.659 [2024-07-22 10:55:37.182828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.659 [2024-07-22 10:55:37.182833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.659 [2024-07-22 10:55:37.182838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.659 [2024-07-22 10:55:37.182848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.659 qpair failed and we were unable to recover it. 
00:39:31.659 [2024-07-22 10:55:37.192793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.659 [2024-07-22 10:55:37.192843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.659 [2024-07-22 10:55:37.192853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.659 [2024-07-22 10:55:37.192858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.659 [2024-07-22 10:55:37.192864] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.659 [2024-07-22 10:55:37.192874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.659 qpair failed and we were unable to recover it. 00:39:31.659 [2024-07-22 10:55:37.202816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.659 [2024-07-22 10:55:37.202871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.659 [2024-07-22 10:55:37.202882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.659 [2024-07-22 10:55:37.202886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.659 [2024-07-22 10:55:37.202891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.659 [2024-07-22 10:55:37.202902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.659 qpair failed and we were unable to recover it. 00:39:31.659 [2024-07-22 10:55:37.212770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.660 [2024-07-22 10:55:37.212819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.660 [2024-07-22 10:55:37.212830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.660 [2024-07-22 10:55:37.212835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.660 [2024-07-22 10:55:37.212840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.660 [2024-07-22 10:55:37.212851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.660 qpair failed and we were unable to recover it. 
00:39:31.660 [2024-07-22 10:55:37.222867] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.660 [2024-07-22 10:55:37.222919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.660 [2024-07-22 10:55:37.222932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.660 [2024-07-22 10:55:37.222938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.660 [2024-07-22 10:55:37.222942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.660 [2024-07-22 10:55:37.222953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.660 qpair failed and we were unable to recover it. 00:39:31.660 [2024-07-22 10:55:37.232885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.660 [2024-07-22 10:55:37.232933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.660 [2024-07-22 10:55:37.232944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.660 [2024-07-22 10:55:37.232949] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.660 [2024-07-22 10:55:37.232953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.660 [2024-07-22 10:55:37.232964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.660 qpair failed and we were unable to recover it. 00:39:31.660 [2024-07-22 10:55:37.242931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.660 [2024-07-22 10:55:37.242982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.660 [2024-07-22 10:55:37.242993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.660 [2024-07-22 10:55:37.242998] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.660 [2024-07-22 10:55:37.243002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.660 [2024-07-22 10:55:37.243013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.660 qpair failed and we were unable to recover it. 
00:39:31.660 [2024-07-22 10:55:37.252910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.660 [2024-07-22 10:55:37.252962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.660 [2024-07-22 10:55:37.252972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.660 [2024-07-22 10:55:37.252977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.660 [2024-07-22 10:55:37.252982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.660 [2024-07-22 10:55:37.252992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.660 qpair failed and we were unable to recover it. 00:39:31.660 [2024-07-22 10:55:37.262982] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.660 [2024-07-22 10:55:37.263037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.660 [2024-07-22 10:55:37.263048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.660 [2024-07-22 10:55:37.263053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.660 [2024-07-22 10:55:37.263057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.660 [2024-07-22 10:55:37.263070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.660 qpair failed and we were unable to recover it. 00:39:31.660 [2024-07-22 10:55:37.273007] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.660 [2024-07-22 10:55:37.273091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.660 [2024-07-22 10:55:37.273102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.660 [2024-07-22 10:55:37.273106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.660 [2024-07-22 10:55:37.273112] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.660 [2024-07-22 10:55:37.273122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.660 qpair failed and we were unable to recover it. 
00:39:31.660 [2024-07-22 10:55:37.283000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.660 [2024-07-22 10:55:37.283053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.660 [2024-07-22 10:55:37.283064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.660 [2024-07-22 10:55:37.283069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.660 [2024-07-22 10:55:37.283073] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.660 [2024-07-22 10:55:37.283084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.660 qpair failed and we were unable to recover it. 00:39:31.660 [2024-07-22 10:55:37.293025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.660 [2024-07-22 10:55:37.293073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.660 [2024-07-22 10:55:37.293084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.660 [2024-07-22 10:55:37.293089] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.660 [2024-07-22 10:55:37.293094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.660 [2024-07-22 10:55:37.293104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.660 qpair failed and we were unable to recover it. 00:39:31.660 [2024-07-22 10:55:37.303104] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.660 [2024-07-22 10:55:37.303159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.660 [2024-07-22 10:55:37.303169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.660 [2024-07-22 10:55:37.303174] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.660 [2024-07-22 10:55:37.303179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.660 [2024-07-22 10:55:37.303189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.660 qpair failed and we were unable to recover it. 
00:39:31.660 [2024-07-22 10:55:37.313088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.660 [2024-07-22 10:55:37.313140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.660 [2024-07-22 10:55:37.313162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.660 [2024-07-22 10:55:37.313168] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.660 [2024-07-22 10:55:37.313173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.660 [2024-07-22 10:55:37.313187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.660 qpair failed and we were unable to recover it. 00:39:31.660 [2024-07-22 10:55:37.323186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.660 [2024-07-22 10:55:37.323241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.660 [2024-07-22 10:55:37.323259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.660 [2024-07-22 10:55:37.323265] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.660 [2024-07-22 10:55:37.323270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.660 [2024-07-22 10:55:37.323283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.660 qpair failed and we were unable to recover it. 00:39:31.660 [2024-07-22 10:55:37.333137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.660 [2024-07-22 10:55:37.333196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.660 [2024-07-22 10:55:37.333208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.660 [2024-07-22 10:55:37.333214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.660 [2024-07-22 10:55:37.333218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.660 [2024-07-22 10:55:37.333229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.660 qpair failed and we were unable to recover it. 
00:39:31.660 [2024-07-22 10:55:37.343069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.660 [2024-07-22 10:55:37.343120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.660 [2024-07-22 10:55:37.343131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.660 [2024-07-22 10:55:37.343137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.660 [2024-07-22 10:55:37.343141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.660 [2024-07-22 10:55:37.343152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.660 qpair failed and we were unable to recover it. 00:39:31.660 [2024-07-22 10:55:37.353218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.660 [2024-07-22 10:55:37.353316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.660 [2024-07-22 10:55:37.353328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.661 [2024-07-22 10:55:37.353333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.661 [2024-07-22 10:55:37.353343] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.661 [2024-07-22 10:55:37.353354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.661 qpair failed and we were unable to recover it. 00:39:31.922 [2024-07-22 10:55:37.363261] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.922 [2024-07-22 10:55:37.363314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.922 [2024-07-22 10:55:37.363325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.922 [2024-07-22 10:55:37.363330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.922 [2024-07-22 10:55:37.363335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.922 [2024-07-22 10:55:37.363346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.922 qpair failed and we were unable to recover it. 
00:39:31.922 [2024-07-22 10:55:37.373264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.922 [2024-07-22 10:55:37.373311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.922 [2024-07-22 10:55:37.373322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.922 [2024-07-22 10:55:37.373327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.922 [2024-07-22 10:55:37.373332] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.922 [2024-07-22 10:55:37.373342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.922 qpair failed and we were unable to recover it. 00:39:31.922 [2024-07-22 10:55:37.383321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.922 [2024-07-22 10:55:37.383374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.922 [2024-07-22 10:55:37.383386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.922 [2024-07-22 10:55:37.383391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.922 [2024-07-22 10:55:37.383398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.922 [2024-07-22 10:55:37.383409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.922 qpair failed and we were unable to recover it. 00:39:31.922 [2024-07-22 10:55:37.393335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.922 [2024-07-22 10:55:37.393398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.922 [2024-07-22 10:55:37.393409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.922 [2024-07-22 10:55:37.393414] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.922 [2024-07-22 10:55:37.393419] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.922 [2024-07-22 10:55:37.393430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.922 qpair failed and we were unable to recover it. 
00:39:31.922 [2024-07-22 10:55:37.403381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.922 [2024-07-22 10:55:37.403440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.922 [2024-07-22 10:55:37.403451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.922 [2024-07-22 10:55:37.403456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.922 [2024-07-22 10:55:37.403460] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.922 [2024-07-22 10:55:37.403471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.922 qpair failed and we were unable to recover it. 00:39:31.922 [2024-07-22 10:55:37.413367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.922 [2024-07-22 10:55:37.413459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.922 [2024-07-22 10:55:37.413470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.922 [2024-07-22 10:55:37.413475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.922 [2024-07-22 10:55:37.413480] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.922 [2024-07-22 10:55:37.413491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.922 qpair failed and we were unable to recover it. 00:39:31.922 [2024-07-22 10:55:37.423442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.922 [2024-07-22 10:55:37.423487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.922 [2024-07-22 10:55:37.423499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.922 [2024-07-22 10:55:37.423504] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.922 [2024-07-22 10:55:37.423508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.922 [2024-07-22 10:55:37.423518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.922 qpair failed and we were unable to recover it. 
00:39:31.922 [2024-07-22 10:55:37.433458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.922 [2024-07-22 10:55:37.433515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.922 [2024-07-22 10:55:37.433526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.922 [2024-07-22 10:55:37.433532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.922 [2024-07-22 10:55:37.433536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.922 [2024-07-22 10:55:37.433549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.922 qpair failed and we were unable to recover it. 00:39:31.922 [2024-07-22 10:55:37.443488] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.922 [2024-07-22 10:55:37.443538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.922 [2024-07-22 10:55:37.443550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.922 [2024-07-22 10:55:37.443557] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.922 [2024-07-22 10:55:37.443562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.922 [2024-07-22 10:55:37.443573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.922 qpair failed and we were unable to recover it. 00:39:31.922 [2024-07-22 10:55:37.453471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.922 [2024-07-22 10:55:37.453518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.922 [2024-07-22 10:55:37.453528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.922 [2024-07-22 10:55:37.453534] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.922 [2024-07-22 10:55:37.453538] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.922 [2024-07-22 10:55:37.453548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.922 qpair failed and we were unable to recover it. 
00:39:31.922 [2024-07-22 10:55:37.463518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.922 [2024-07-22 10:55:37.463573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.922 [2024-07-22 10:55:37.463583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.922 [2024-07-22 10:55:37.463588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.922 [2024-07-22 10:55:37.463593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.922 [2024-07-22 10:55:37.463603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.922 qpair failed and we were unable to recover it. 00:39:31.922 [2024-07-22 10:55:37.473543] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.922 [2024-07-22 10:55:37.473603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.922 [2024-07-22 10:55:37.473615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.922 [2024-07-22 10:55:37.473621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.922 [2024-07-22 10:55:37.473627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.923 [2024-07-22 10:55:37.473639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.923 qpair failed and we were unable to recover it. 00:39:31.923 [2024-07-22 10:55:37.483627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.923 [2024-07-22 10:55:37.483688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.923 [2024-07-22 10:55:37.483699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.923 [2024-07-22 10:55:37.483704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.923 [2024-07-22 10:55:37.483708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.923 [2024-07-22 10:55:37.483718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.923 qpair failed and we were unable to recover it. 
00:39:31.923 [2024-07-22 10:55:37.493602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.923 [2024-07-22 10:55:37.493668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.923 [2024-07-22 10:55:37.493678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.923 [2024-07-22 10:55:37.493683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.923 [2024-07-22 10:55:37.493688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.923 [2024-07-22 10:55:37.493698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.923 qpair failed and we were unable to recover it. 00:39:31.923 [2024-07-22 10:55:37.503664] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.923 [2024-07-22 10:55:37.503711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.923 [2024-07-22 10:55:37.503722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.923 [2024-07-22 10:55:37.503727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.923 [2024-07-22 10:55:37.503731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.923 [2024-07-22 10:55:37.503741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.923 qpair failed and we were unable to recover it. 00:39:31.923 [2024-07-22 10:55:37.513700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.923 [2024-07-22 10:55:37.513762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.923 [2024-07-22 10:55:37.513773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.923 [2024-07-22 10:55:37.513778] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.923 [2024-07-22 10:55:37.513782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.923 [2024-07-22 10:55:37.513792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.923 qpair failed and we were unable to recover it. 
00:39:31.923 [2024-07-22 10:55:37.523751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.923 [2024-07-22 10:55:37.523816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.923 [2024-07-22 10:55:37.523827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.923 [2024-07-22 10:55:37.523831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.923 [2024-07-22 10:55:37.523835] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.923 [2024-07-22 10:55:37.523846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.923 qpair failed and we were unable to recover it. 00:39:31.923 [2024-07-22 10:55:37.533682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.923 [2024-07-22 10:55:37.533746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.923 [2024-07-22 10:55:37.533757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.923 [2024-07-22 10:55:37.533764] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.923 [2024-07-22 10:55:37.533769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.923 [2024-07-22 10:55:37.533779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.923 qpair failed and we were unable to recover it. 00:39:31.923 [2024-07-22 10:55:37.543744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.923 [2024-07-22 10:55:37.543835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.923 [2024-07-22 10:55:37.543846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.923 [2024-07-22 10:55:37.543851] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.923 [2024-07-22 10:55:37.543856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.923 [2024-07-22 10:55:37.543866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.923 qpair failed and we were unable to recover it. 
00:39:31.923 [2024-07-22 10:55:37.553794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.923 [2024-07-22 10:55:37.553841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.923 [2024-07-22 10:55:37.553851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.923 [2024-07-22 10:55:37.553856] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.923 [2024-07-22 10:55:37.553861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.923 [2024-07-22 10:55:37.553871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.923 qpair failed and we were unable to recover it. 00:39:31.923 [2024-07-22 10:55:37.563855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.923 [2024-07-22 10:55:37.563908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.923 [2024-07-22 10:55:37.563920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.923 [2024-07-22 10:55:37.563925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.923 [2024-07-22 10:55:37.563929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.923 [2024-07-22 10:55:37.563940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.923 qpair failed and we were unable to recover it. 00:39:31.923 [2024-07-22 10:55:37.573809] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.923 [2024-07-22 10:55:37.573872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.923 [2024-07-22 10:55:37.573883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.923 [2024-07-22 10:55:37.573888] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.923 [2024-07-22 10:55:37.573892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.923 [2024-07-22 10:55:37.573903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.923 qpair failed and we were unable to recover it. 
00:39:31.923 [2024-07-22 10:55:37.583874] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.923 [2024-07-22 10:55:37.583925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.923 [2024-07-22 10:55:37.583936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.923 [2024-07-22 10:55:37.583941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.923 [2024-07-22 10:55:37.583946] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.923 [2024-07-22 10:55:37.583956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.923 qpair failed and we were unable to recover it. 00:39:31.923 [2024-07-22 10:55:37.593822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.923 [2024-07-22 10:55:37.593872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.923 [2024-07-22 10:55:37.593883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.923 [2024-07-22 10:55:37.593888] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.923 [2024-07-22 10:55:37.593892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.923 [2024-07-22 10:55:37.593903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.923 qpair failed and we were unable to recover it. 00:39:31.923 [2024-07-22 10:55:37.603928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.923 [2024-07-22 10:55:37.603980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.923 [2024-07-22 10:55:37.603990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.923 [2024-07-22 10:55:37.603995] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.923 [2024-07-22 10:55:37.604000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.923 [2024-07-22 10:55:37.604011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.923 qpair failed and we were unable to recover it. 
00:39:31.923 [2024-07-22 10:55:37.613929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.923 [2024-07-22 10:55:37.613978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.923 [2024-07-22 10:55:37.613988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.923 [2024-07-22 10:55:37.613993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.923 [2024-07-22 10:55:37.613998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:31.923 [2024-07-22 10:55:37.614009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.923 qpair failed and we were unable to recover it. 00:39:32.185 [2024-07-22 10:55:37.623983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.185 [2024-07-22 10:55:37.624034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.185 [2024-07-22 10:55:37.624048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.185 [2024-07-22 10:55:37.624053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.185 [2024-07-22 10:55:37.624057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.185 [2024-07-22 10:55:37.624068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.185 qpair failed and we were unable to recover it. 00:39:32.185 [2024-07-22 10:55:37.633971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.185 [2024-07-22 10:55:37.634056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.185 [2024-07-22 10:55:37.634068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.185 [2024-07-22 10:55:37.634073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.185 [2024-07-22 10:55:37.634078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.185 [2024-07-22 10:55:37.634089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.185 qpair failed and we were unable to recover it. 
00:39:32.185 [2024-07-22 10:55:37.644044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.185 [2024-07-22 10:55:37.644096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.185 [2024-07-22 10:55:37.644109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.185 [2024-07-22 10:55:37.644114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.185 [2024-07-22 10:55:37.644118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.185 [2024-07-22 10:55:37.644130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.185 qpair failed and we were unable to recover it. 00:39:32.185 [2024-07-22 10:55:37.653998] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.185 [2024-07-22 10:55:37.654048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.185 [2024-07-22 10:55:37.654059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.185 [2024-07-22 10:55:37.654064] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.185 [2024-07-22 10:55:37.654068] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.185 [2024-07-22 10:55:37.654079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.185 qpair failed and we were unable to recover it. 00:39:32.185 [2024-07-22 10:55:37.664088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.185 [2024-07-22 10:55:37.664175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.185 [2024-07-22 10:55:37.664186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.185 [2024-07-22 10:55:37.664191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.185 [2024-07-22 10:55:37.664196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.185 [2024-07-22 10:55:37.664209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.185 qpair failed and we were unable to recover it. 
00:39:32.185 [2024-07-22 10:55:37.674089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.185 [2024-07-22 10:55:37.674134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.185 [2024-07-22 10:55:37.674145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.185 [2024-07-22 10:55:37.674151] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.185 [2024-07-22 10:55:37.674155] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.185 [2024-07-22 10:55:37.674166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.185 qpair failed and we were unable to recover it. 00:39:32.185 [2024-07-22 10:55:37.684147] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.185 [2024-07-22 10:55:37.684198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.185 [2024-07-22 10:55:37.684209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.185 [2024-07-22 10:55:37.684214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.185 [2024-07-22 10:55:37.684219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.185 [2024-07-22 10:55:37.684230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.185 qpair failed and we were unable to recover it. 00:39:32.185 [2024-07-22 10:55:37.694143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.185 [2024-07-22 10:55:37.694190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.185 [2024-07-22 10:55:37.694202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.185 [2024-07-22 10:55:37.694206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.185 [2024-07-22 10:55:37.694211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.185 [2024-07-22 10:55:37.694222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.185 qpair failed and we were unable to recover it. 
00:39:32.185 [2024-07-22 10:55:37.704206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.185 [2024-07-22 10:55:37.704257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.185 [2024-07-22 10:55:37.704267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.185 [2024-07-22 10:55:37.704273] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.185 [2024-07-22 10:55:37.704277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.186 [2024-07-22 10:55:37.704288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.186 qpair failed and we were unable to recover it. 00:39:32.186 [2024-07-22 10:55:37.714198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.186 [2024-07-22 10:55:37.714243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.186 [2024-07-22 10:55:37.714256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.186 [2024-07-22 10:55:37.714261] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.186 [2024-07-22 10:55:37.714266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.186 [2024-07-22 10:55:37.714276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.186 qpair failed and we were unable to recover it. 00:39:32.186 [2024-07-22 10:55:37.724262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.186 [2024-07-22 10:55:37.724346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.186 [2024-07-22 10:55:37.724357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.186 [2024-07-22 10:55:37.724362] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.186 [2024-07-22 10:55:37.724366] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.186 [2024-07-22 10:55:37.724377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.186 qpair failed and we were unable to recover it. 
00:39:32.186 [2024-07-22 10:55:37.734243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.186 [2024-07-22 10:55:37.734290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.186 [2024-07-22 10:55:37.734301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.186 [2024-07-22 10:55:37.734306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.186 [2024-07-22 10:55:37.734310] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.186 [2024-07-22 10:55:37.734321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.186 qpair failed and we were unable to recover it. 00:39:32.186 [2024-07-22 10:55:37.744312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.186 [2024-07-22 10:55:37.744392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.186 [2024-07-22 10:55:37.744406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.186 [2024-07-22 10:55:37.744411] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.186 [2024-07-22 10:55:37.744416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.186 [2024-07-22 10:55:37.744427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.186 qpair failed and we were unable to recover it. 00:39:32.186 [2024-07-22 10:55:37.754183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.186 [2024-07-22 10:55:37.754230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.186 [2024-07-22 10:55:37.754242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.186 [2024-07-22 10:55:37.754247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.186 [2024-07-22 10:55:37.754254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.186 [2024-07-22 10:55:37.754265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.186 qpair failed and we were unable to recover it. 
00:39:32.186 [2024-07-22 10:55:37.764417] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.186 [2024-07-22 10:55:37.764473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.186 [2024-07-22 10:55:37.764484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.186 [2024-07-22 10:55:37.764489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.186 [2024-07-22 10:55:37.764493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.186 [2024-07-22 10:55:37.764504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.186 qpair failed and we were unable to recover it. 00:39:32.186 [2024-07-22 10:55:37.774374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.186 [2024-07-22 10:55:37.774428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.186 [2024-07-22 10:55:37.774439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.186 [2024-07-22 10:55:37.774444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.186 [2024-07-22 10:55:37.774449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.186 [2024-07-22 10:55:37.774460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.186 qpair failed and we were unable to recover it. 00:39:32.186 [2024-07-22 10:55:37.784302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.186 [2024-07-22 10:55:37.784352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.186 [2024-07-22 10:55:37.784364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.186 [2024-07-22 10:55:37.784369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.186 [2024-07-22 10:55:37.784374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.186 [2024-07-22 10:55:37.784385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.186 qpair failed and we were unable to recover it. 
00:39:32.186 [2024-07-22 10:55:37.794276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.186 [2024-07-22 10:55:37.794319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.186 [2024-07-22 10:55:37.794330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.186 [2024-07-22 10:55:37.794335] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.186 [2024-07-22 10:55:37.794339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.186 [2024-07-22 10:55:37.794350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.186 qpair failed and we were unable to recover it. 00:39:32.186 [2024-07-22 10:55:37.804474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.186 [2024-07-22 10:55:37.804530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.186 [2024-07-22 10:55:37.804541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.186 [2024-07-22 10:55:37.804546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.186 [2024-07-22 10:55:37.804550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.186 [2024-07-22 10:55:37.804561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.186 qpair failed and we were unable to recover it. 00:39:32.186 [2024-07-22 10:55:37.814463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.186 [2024-07-22 10:55:37.814512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.186 [2024-07-22 10:55:37.814524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.186 [2024-07-22 10:55:37.814529] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.186 [2024-07-22 10:55:37.814534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.186 [2024-07-22 10:55:37.814545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.186 qpair failed and we were unable to recover it. 
00:39:32.186 [2024-07-22 10:55:37.824497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.186 [2024-07-22 10:55:37.824543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.186 [2024-07-22 10:55:37.824553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.186 [2024-07-22 10:55:37.824558] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.186 [2024-07-22 10:55:37.824563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.186 [2024-07-22 10:55:37.824573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.186 qpair failed and we were unable to recover it. 00:39:32.186 [2024-07-22 10:55:37.834496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.186 [2024-07-22 10:55:37.834554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.186 [2024-07-22 10:55:37.834564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.186 [2024-07-22 10:55:37.834569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.186 [2024-07-22 10:55:37.834574] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.186 [2024-07-22 10:55:37.834584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.186 qpair failed and we were unable to recover it. 00:39:32.186 [2024-07-22 10:55:37.844549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.186 [2024-07-22 10:55:37.844620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.186 [2024-07-22 10:55:37.844630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.186 [2024-07-22 10:55:37.844635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.186 [2024-07-22 10:55:37.844642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.186 [2024-07-22 10:55:37.844653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.186 qpair failed and we were unable to recover it. 
00:39:32.186 [2024-07-22 10:55:37.854526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.187 [2024-07-22 10:55:37.854576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.187 [2024-07-22 10:55:37.854588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.187 [2024-07-22 10:55:37.854593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.187 [2024-07-22 10:55:37.854598] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.187 [2024-07-22 10:55:37.854609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.187 qpair failed and we were unable to recover it. 00:39:32.187 [2024-07-22 10:55:37.864641] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.187 [2024-07-22 10:55:37.864723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.187 [2024-07-22 10:55:37.864734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.187 [2024-07-22 10:55:37.864739] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.187 [2024-07-22 10:55:37.864743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.187 [2024-07-22 10:55:37.864754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.187 qpair failed and we were unable to recover it. 00:39:32.187 [2024-07-22 10:55:37.874608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.187 [2024-07-22 10:55:37.874656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.187 [2024-07-22 10:55:37.874666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.187 [2024-07-22 10:55:37.874671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.187 [2024-07-22 10:55:37.874676] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.187 [2024-07-22 10:55:37.874686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.187 qpair failed and we were unable to recover it. 
00:39:32.447 [2024-07-22 10:55:37.884669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.447 [2024-07-22 10:55:37.884712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.447 [2024-07-22 10:55:37.884723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.447 [2024-07-22 10:55:37.884728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.447 [2024-07-22 10:55:37.884733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.447 [2024-07-22 10:55:37.884744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.447 qpair failed and we were unable to recover it. 00:39:32.447 [2024-07-22 10:55:37.894661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.447 [2024-07-22 10:55:37.894714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.447 [2024-07-22 10:55:37.894725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.447 [2024-07-22 10:55:37.894730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.447 [2024-07-22 10:55:37.894735] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.447 [2024-07-22 10:55:37.894745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.447 qpair failed and we were unable to recover it. 00:39:32.447 [2024-07-22 10:55:37.904723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.447 [2024-07-22 10:55:37.904770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.447 [2024-07-22 10:55:37.904780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.447 [2024-07-22 10:55:37.904785] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.448 [2024-07-22 10:55:37.904790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.448 [2024-07-22 10:55:37.904800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.448 qpair failed and we were unable to recover it. 
00:39:32.448 [2024-07-22 10:55:37.914617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.448 [2024-07-22 10:55:37.914671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.448 [2024-07-22 10:55:37.914682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.448 [2024-07-22 10:55:37.914687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.448 [2024-07-22 10:55:37.914691] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.448 [2024-07-22 10:55:37.914701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.448 qpair failed and we were unable to recover it. 00:39:32.448 [2024-07-22 10:55:37.924738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.448 [2024-07-22 10:55:37.924785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.448 [2024-07-22 10:55:37.924795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.448 [2024-07-22 10:55:37.924800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.448 [2024-07-22 10:55:37.924805] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.448 [2024-07-22 10:55:37.924815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.448 qpair failed and we were unable to recover it. 00:39:32.448 [2024-07-22 10:55:37.934776] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.448 [2024-07-22 10:55:37.934822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.448 [2024-07-22 10:55:37.934833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.448 [2024-07-22 10:55:37.934841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.448 [2024-07-22 10:55:37.934846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.448 [2024-07-22 10:55:37.934856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.448 qpair failed and we were unable to recover it. 
00:39:32.448 [2024-07-22 10:55:37.944826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.448 [2024-07-22 10:55:37.944872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.448 [2024-07-22 10:55:37.944883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.448 [2024-07-22 10:55:37.944888] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.448 [2024-07-22 10:55:37.944893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.448 [2024-07-22 10:55:37.944903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.448 qpair failed and we were unable to recover it. 00:39:32.448 [2024-07-22 10:55:37.954815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.448 [2024-07-22 10:55:37.954861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.448 [2024-07-22 10:55:37.954872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.448 [2024-07-22 10:55:37.954877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.448 [2024-07-22 10:55:37.954881] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.448 [2024-07-22 10:55:37.954891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.448 qpair failed and we were unable to recover it. 00:39:32.448 [2024-07-22 10:55:37.964854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.448 [2024-07-22 10:55:37.964902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.448 [2024-07-22 10:55:37.964913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.448 [2024-07-22 10:55:37.964918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.448 [2024-07-22 10:55:37.964922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.448 [2024-07-22 10:55:37.964933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.448 qpair failed and we were unable to recover it. 
00:39:32.448 [2024-07-22 10:55:37.974758] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.448 [2024-07-22 10:55:37.974804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.448 [2024-07-22 10:55:37.974816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.448 [2024-07-22 10:55:37.974821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.448 [2024-07-22 10:55:37.974826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.448 [2024-07-22 10:55:37.974836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.448 qpair failed and we were unable to recover it. 00:39:32.448 [2024-07-22 10:55:37.984828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.448 [2024-07-22 10:55:37.984899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.448 [2024-07-22 10:55:37.984910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.448 [2024-07-22 10:55:37.984915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.448 [2024-07-22 10:55:37.984920] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.448 [2024-07-22 10:55:37.984931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.448 qpair failed and we were unable to recover it. 00:39:32.448 [2024-07-22 10:55:37.994948] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.448 [2024-07-22 10:55:37.995002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.448 [2024-07-22 10:55:37.995014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.448 [2024-07-22 10:55:37.995019] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.448 [2024-07-22 10:55:37.995023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.448 [2024-07-22 10:55:37.995033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.448 qpair failed and we were unable to recover it. 
00:39:32.448 [2024-07-22 10:55:38.004985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.448 [2024-07-22 10:55:38.005032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.448 [2024-07-22 10:55:38.005043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.448 [2024-07-22 10:55:38.005048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.448 [2024-07-22 10:55:38.005052] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.448 [2024-07-22 10:55:38.005063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.448 qpair failed and we were unable to recover it. 00:39:32.448 [2024-07-22 10:55:38.014990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.448 [2024-07-22 10:55:38.015042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.448 [2024-07-22 10:55:38.015052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.448 [2024-07-22 10:55:38.015057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.448 [2024-07-22 10:55:38.015062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.448 [2024-07-22 10:55:38.015072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.448 qpair failed and we were unable to recover it. 00:39:32.448 [2024-07-22 10:55:38.025027] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.448 [2024-07-22 10:55:38.025070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.448 [2024-07-22 10:55:38.025084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.448 [2024-07-22 10:55:38.025089] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.448 [2024-07-22 10:55:38.025093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.448 [2024-07-22 10:55:38.025103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.448 qpair failed and we were unable to recover it. 
00:39:32.448 [2024-07-22 10:55:38.035035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.448 [2024-07-22 10:55:38.035077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.448 [2024-07-22 10:55:38.035090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.448 [2024-07-22 10:55:38.035095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.448 [2024-07-22 10:55:38.035100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.448 [2024-07-22 10:55:38.035111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.448 qpair failed and we were unable to recover it. 00:39:32.448 [2024-07-22 10:55:38.045051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.448 [2024-07-22 10:55:38.045093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.448 [2024-07-22 10:55:38.045104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.448 [2024-07-22 10:55:38.045109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.448 [2024-07-22 10:55:38.045113] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.448 [2024-07-22 10:55:38.045123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.448 qpair failed and we were unable to recover it. 00:39:32.449 [2024-07-22 10:55:38.055099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.449 [2024-07-22 10:55:38.055148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.449 [2024-07-22 10:55:38.055160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.449 [2024-07-22 10:55:38.055165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.449 [2024-07-22 10:55:38.055169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.449 [2024-07-22 10:55:38.055179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.449 qpair failed and we were unable to recover it. 
00:39:32.449 [2024-07-22 10:55:38.065041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.449 [2024-07-22 10:55:38.065087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.449 [2024-07-22 10:55:38.065098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.449 [2024-07-22 10:55:38.065103] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.449 [2024-07-22 10:55:38.065107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.449 [2024-07-22 10:55:38.065123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.449 qpair failed and we were unable to recover it. 00:39:32.449 [2024-07-22 10:55:38.075167] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.449 [2024-07-22 10:55:38.075211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.449 [2024-07-22 10:55:38.075221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.449 [2024-07-22 10:55:38.075226] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.449 [2024-07-22 10:55:38.075231] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.449 [2024-07-22 10:55:38.075241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.449 qpair failed and we were unable to recover it. 00:39:32.449 [2024-07-22 10:55:38.085058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.449 [2024-07-22 10:55:38.085102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.449 [2024-07-22 10:55:38.085113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.449 [2024-07-22 10:55:38.085118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.449 [2024-07-22 10:55:38.085122] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.449 [2024-07-22 10:55:38.085133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.449 qpair failed and we were unable to recover it. 
00:39:32.449 [2024-07-22 10:55:38.095177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.449 [2024-07-22 10:55:38.095250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.449 [2024-07-22 10:55:38.095261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.449 [2024-07-22 10:55:38.095266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.449 [2024-07-22 10:55:38.095270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.449 [2024-07-22 10:55:38.095281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.449 qpair failed and we were unable to recover it. 00:39:32.449 [2024-07-22 10:55:38.105283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.449 [2024-07-22 10:55:38.105329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.449 [2024-07-22 10:55:38.105340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.449 [2024-07-22 10:55:38.105345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.449 [2024-07-22 10:55:38.105349] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.449 [2024-07-22 10:55:38.105359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.449 qpair failed and we were unable to recover it. 00:39:32.449 [2024-07-22 10:55:38.115255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.449 [2024-07-22 10:55:38.115313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.449 [2024-07-22 10:55:38.115327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.449 [2024-07-22 10:55:38.115332] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.449 [2024-07-22 10:55:38.115336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.449 [2024-07-22 10:55:38.115346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.449 qpair failed and we were unable to recover it. 
00:39:32.449 [2024-07-22 10:55:38.125270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.449 [2024-07-22 10:55:38.125315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.449 [2024-07-22 10:55:38.125325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.449 [2024-07-22 10:55:38.125331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.449 [2024-07-22 10:55:38.125336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.449 [2024-07-22 10:55:38.125346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.449 qpair failed and we were unable to recover it. 00:39:32.449 [2024-07-22 10:55:38.135316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.449 [2024-07-22 10:55:38.135364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.449 [2024-07-22 10:55:38.135374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.449 [2024-07-22 10:55:38.135379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.449 [2024-07-22 10:55:38.135384] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.449 [2024-07-22 10:55:38.135397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.449 qpair failed and we were unable to recover it. 00:39:32.710 [2024-07-22 10:55:38.145350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.710 [2024-07-22 10:55:38.145401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.710 [2024-07-22 10:55:38.145413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.710 [2024-07-22 10:55:38.145418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.710 [2024-07-22 10:55:38.145423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.710 [2024-07-22 10:55:38.145433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.710 qpair failed and we were unable to recover it. 
00:39:32.710 [2024-07-22 10:55:38.155379] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.710 [2024-07-22 10:55:38.155425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.710 [2024-07-22 10:55:38.155436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.710 [2024-07-22 10:55:38.155440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.710 [2024-07-22 10:55:38.155449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.710 [2024-07-22 10:55:38.155459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.710 qpair failed and we were unable to recover it. 00:39:32.710 [2024-07-22 10:55:38.165370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.710 [2024-07-22 10:55:38.165418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.710 [2024-07-22 10:55:38.165429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.710 [2024-07-22 10:55:38.165434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.710 [2024-07-22 10:55:38.165438] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.710 [2024-07-22 10:55:38.165449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.710 qpair failed and we were unable to recover it. 00:39:32.710 [2024-07-22 10:55:38.175432] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.710 [2024-07-22 10:55:38.175475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.710 [2024-07-22 10:55:38.175486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.710 [2024-07-22 10:55:38.175491] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.710 [2024-07-22 10:55:38.175495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.710 [2024-07-22 10:55:38.175505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.710 qpair failed and we were unable to recover it. 
00:39:32.710 [2024-07-22 10:55:38.185507] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.710 [2024-07-22 10:55:38.185572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.710 [2024-07-22 10:55:38.185583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.710 [2024-07-22 10:55:38.185588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.710 [2024-07-22 10:55:38.185592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.710 [2024-07-22 10:55:38.185602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.710 qpair failed and we were unable to recover it. 00:39:32.710 [2024-07-22 10:55:38.195484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.710 [2024-07-22 10:55:38.195543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.710 [2024-07-22 10:55:38.195554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.710 [2024-07-22 10:55:38.195559] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.710 [2024-07-22 10:55:38.195563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.710 [2024-07-22 10:55:38.195573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.710 qpair failed and we were unable to recover it. 00:39:32.710 [2024-07-22 10:55:38.205508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.710 [2024-07-22 10:55:38.205555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.710 [2024-07-22 10:55:38.205566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.710 [2024-07-22 10:55:38.205571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.710 [2024-07-22 10:55:38.205575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.710 [2024-07-22 10:55:38.205585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.710 qpair failed and we were unable to recover it. 
00:39:32.710 [2024-07-22 10:55:38.215591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.710 [2024-07-22 10:55:38.215662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.710 [2024-07-22 10:55:38.215673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.711 [2024-07-22 10:55:38.215678] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.711 [2024-07-22 10:55:38.215682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.711 [2024-07-22 10:55:38.215692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.711 qpair failed and we were unable to recover it. 00:39:32.711 [2024-07-22 10:55:38.225542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.711 [2024-07-22 10:55:38.225588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.711 [2024-07-22 10:55:38.225599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.711 [2024-07-22 10:55:38.225603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.711 [2024-07-22 10:55:38.225608] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.711 [2024-07-22 10:55:38.225618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.711 qpair failed and we were unable to recover it. 00:39:32.711 [2024-07-22 10:55:38.235586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.711 [2024-07-22 10:55:38.235625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.711 [2024-07-22 10:55:38.235636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.711 [2024-07-22 10:55:38.235641] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.711 [2024-07-22 10:55:38.235646] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.711 [2024-07-22 10:55:38.235656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.711 qpair failed and we were unable to recover it. 
00:39:32.711 [2024-07-22 10:55:38.245653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.711 [2024-07-22 10:55:38.245717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.711 [2024-07-22 10:55:38.245728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.711 [2024-07-22 10:55:38.245733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.711 [2024-07-22 10:55:38.245740] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.711 [2024-07-22 10:55:38.245751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.711 qpair failed and we were unable to recover it. 00:39:32.711 [2024-07-22 10:55:38.255631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.711 [2024-07-22 10:55:38.255719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.711 [2024-07-22 10:55:38.255729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.711 [2024-07-22 10:55:38.255735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.711 [2024-07-22 10:55:38.255739] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.711 [2024-07-22 10:55:38.255749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.711 qpair failed and we were unable to recover it. 00:39:32.711 [2024-07-22 10:55:38.265531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.711 [2024-07-22 10:55:38.265573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.711 [2024-07-22 10:55:38.265583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.711 [2024-07-22 10:55:38.265588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.711 [2024-07-22 10:55:38.265593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.711 [2024-07-22 10:55:38.265603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.711 qpair failed and we were unable to recover it. 
00:39:32.711 [2024-07-22 10:55:38.275658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.711 [2024-07-22 10:55:38.275703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.711 [2024-07-22 10:55:38.275714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.711 [2024-07-22 10:55:38.275719] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.711 [2024-07-22 10:55:38.275723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.711 [2024-07-22 10:55:38.275734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.711 qpair failed and we were unable to recover it. 00:39:32.711 [2024-07-22 10:55:38.285692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.711 [2024-07-22 10:55:38.285736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.711 [2024-07-22 10:55:38.285746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.711 [2024-07-22 10:55:38.285751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.711 [2024-07-22 10:55:38.285756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.711 [2024-07-22 10:55:38.285766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.711 qpair failed and we were unable to recover it. 00:39:32.711 [2024-07-22 10:55:38.295613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.711 [2024-07-22 10:55:38.295661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.711 [2024-07-22 10:55:38.295672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.711 [2024-07-22 10:55:38.295677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.711 [2024-07-22 10:55:38.295682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.711 [2024-07-22 10:55:38.295692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.711 qpair failed and we were unable to recover it. 
00:39:32.711 [2024-07-22 10:55:38.305636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.711 [2024-07-22 10:55:38.305686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.711 [2024-07-22 10:55:38.305697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.711 [2024-07-22 10:55:38.305702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.711 [2024-07-22 10:55:38.305706] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.711 [2024-07-22 10:55:38.305717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.711 qpair failed and we were unable to recover it. 00:39:32.711 [2024-07-22 10:55:38.315797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.711 [2024-07-22 10:55:38.315839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.711 [2024-07-22 10:55:38.315850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.711 [2024-07-22 10:55:38.315855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.711 [2024-07-22 10:55:38.315859] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.711 [2024-07-22 10:55:38.315869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.711 qpair failed and we were unable to recover it. 00:39:32.711 [2024-07-22 10:55:38.325828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.711 [2024-07-22 10:55:38.325871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.711 [2024-07-22 10:55:38.325882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.711 [2024-07-22 10:55:38.325887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.711 [2024-07-22 10:55:38.325892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.711 [2024-07-22 10:55:38.325902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.711 qpair failed and we were unable to recover it. 
00:39:32.711 [2024-07-22 10:55:38.335719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.711 [2024-07-22 10:55:38.335771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.711 [2024-07-22 10:55:38.335781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.711 [2024-07-22 10:55:38.335789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.711 [2024-07-22 10:55:38.335794] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.711 [2024-07-22 10:55:38.335804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.711 qpair failed and we were unable to recover it. 00:39:32.711 [2024-07-22 10:55:38.345742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.711 [2024-07-22 10:55:38.345804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.711 [2024-07-22 10:55:38.345815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.711 [2024-07-22 10:55:38.345820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.711 [2024-07-22 10:55:38.345824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.711 [2024-07-22 10:55:38.345834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.711 qpair failed and we were unable to recover it. 00:39:32.711 [2024-07-22 10:55:38.355898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.711 [2024-07-22 10:55:38.355939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.711 [2024-07-22 10:55:38.355950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.711 [2024-07-22 10:55:38.355955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.711 [2024-07-22 10:55:38.355959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.711 [2024-07-22 10:55:38.355969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.711 qpair failed and we were unable to recover it. 
00:39:32.711 [2024-07-22 10:55:38.365800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.711 [2024-07-22 10:55:38.365844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.711 [2024-07-22 10:55:38.365855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.711 [2024-07-22 10:55:38.365860] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.711 [2024-07-22 10:55:38.365864] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.711 [2024-07-22 10:55:38.365875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.711 qpair failed and we were unable to recover it. 00:39:32.711 [2024-07-22 10:55:38.375960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.711 [2024-07-22 10:55:38.376006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.711 [2024-07-22 10:55:38.376017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.711 [2024-07-22 10:55:38.376022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.711 [2024-07-22 10:55:38.376026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.711 [2024-07-22 10:55:38.376036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.711 qpair failed and we were unable to recover it. 00:39:32.711 [2024-07-22 10:55:38.385953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.711 [2024-07-22 10:55:38.385994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.711 [2024-07-22 10:55:38.386005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.711 [2024-07-22 10:55:38.386011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.711 [2024-07-22 10:55:38.386015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.711 [2024-07-22 10:55:38.386025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.711 qpair failed and we were unable to recover it. 
00:39:32.711 [2024-07-22 10:55:38.396000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.711 [2024-07-22 10:55:38.396042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.711 [2024-07-22 10:55:38.396053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.711 [2024-07-22 10:55:38.396058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.711 [2024-07-22 10:55:38.396063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.711 [2024-07-22 10:55:38.396072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.711 qpair failed and we were unable to recover it. 00:39:32.711 [2024-07-22 10:55:38.406037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.711 [2024-07-22 10:55:38.406087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.711 [2024-07-22 10:55:38.406097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.711 [2024-07-22 10:55:38.406102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.711 [2024-07-22 10:55:38.406106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.711 [2024-07-22 10:55:38.406117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.711 qpair failed and we were unable to recover it. 00:39:32.973 [2024-07-22 10:55:38.416056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.973 [2024-07-22 10:55:38.416102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.973 [2024-07-22 10:55:38.416113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.973 [2024-07-22 10:55:38.416118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.973 [2024-07-22 10:55:38.416122] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.973 [2024-07-22 10:55:38.416133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.973 qpair failed and we were unable to recover it. 
00:39:32.973 [2024-07-22 10:55:38.426149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.973 [2024-07-22 10:55:38.426207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.973 [2024-07-22 10:55:38.426220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.973 [2024-07-22 10:55:38.426225] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.973 [2024-07-22 10:55:38.426230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.973 [2024-07-22 10:55:38.426240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.973 qpair failed and we were unable to recover it. 00:39:32.973 [2024-07-22 10:55:38.436125] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.973 [2024-07-22 10:55:38.436164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.973 [2024-07-22 10:55:38.436175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.973 [2024-07-22 10:55:38.436180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.973 [2024-07-22 10:55:38.436184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.973 [2024-07-22 10:55:38.436194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.973 qpair failed and we were unable to recover it. 00:39:32.973 [2024-07-22 10:55:38.446147] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.973 [2024-07-22 10:55:38.446190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.973 [2024-07-22 10:55:38.446200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.973 [2024-07-22 10:55:38.446205] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.973 [2024-07-22 10:55:38.446210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.973 [2024-07-22 10:55:38.446220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.973 qpair failed and we were unable to recover it. 
00:39:32.973 [2024-07-22 10:55:38.456161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.973 [2024-07-22 10:55:38.456214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.973 [2024-07-22 10:55:38.456232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.973 [2024-07-22 10:55:38.456238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.973 [2024-07-22 10:55:38.456243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.973 [2024-07-22 10:55:38.456257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.973 qpair failed and we were unable to recover it. 00:39:32.973 [2024-07-22 10:55:38.466213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.973 [2024-07-22 10:55:38.466281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.973 [2024-07-22 10:55:38.466294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.973 [2024-07-22 10:55:38.466299] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.973 [2024-07-22 10:55:38.466304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.973 [2024-07-22 10:55:38.466318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.973 qpair failed and we were unable to recover it. 00:39:32.973 [2024-07-22 10:55:38.476224] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.973 [2024-07-22 10:55:38.476278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.973 [2024-07-22 10:55:38.476290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.973 [2024-07-22 10:55:38.476295] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.973 [2024-07-22 10:55:38.476299] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.973 [2024-07-22 10:55:38.476309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.973 qpair failed and we were unable to recover it. 
00:39:32.973 [2024-07-22 10:55:38.486247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.973 [2024-07-22 10:55:38.486301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.973 [2024-07-22 10:55:38.486312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.973 [2024-07-22 10:55:38.486317] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.973 [2024-07-22 10:55:38.486321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.973 [2024-07-22 10:55:38.486332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.973 qpair failed and we were unable to recover it. 00:39:32.973 [2024-07-22 10:55:38.496298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.973 [2024-07-22 10:55:38.496356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.973 [2024-07-22 10:55:38.496367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.973 [2024-07-22 10:55:38.496372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.973 [2024-07-22 10:55:38.496376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.973 [2024-07-22 10:55:38.496386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.973 qpair failed and we were unable to recover it. 00:39:32.973 [2024-07-22 10:55:38.506324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.973 [2024-07-22 10:55:38.506375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.973 [2024-07-22 10:55:38.506387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.973 [2024-07-22 10:55:38.506391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.973 [2024-07-22 10:55:38.506399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.973 [2024-07-22 10:55:38.506410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.973 qpair failed and we were unable to recover it. 
00:39:32.973 [2024-07-22 10:55:38.516342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.973 [2024-07-22 10:55:38.516387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.973 [2024-07-22 10:55:38.516402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.974 [2024-07-22 10:55:38.516408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.974 [2024-07-22 10:55:38.516412] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.974 [2024-07-22 10:55:38.516422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.974 qpair failed and we were unable to recover it. 00:39:32.974 [2024-07-22 10:55:38.526391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.974 [2024-07-22 10:55:38.526437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.974 [2024-07-22 10:55:38.526447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.974 [2024-07-22 10:55:38.526452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.974 [2024-07-22 10:55:38.526457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.974 [2024-07-22 10:55:38.526467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.974 qpair failed and we were unable to recover it. 00:39:32.974 [2024-07-22 10:55:38.536254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.974 [2024-07-22 10:55:38.536303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.974 [2024-07-22 10:55:38.536315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.974 [2024-07-22 10:55:38.536320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.974 [2024-07-22 10:55:38.536324] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.974 [2024-07-22 10:55:38.536335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.974 qpair failed and we were unable to recover it. 
00:39:32.974 [2024-07-22 10:55:38.546278] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.974 [2024-07-22 10:55:38.546345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.974 [2024-07-22 10:55:38.546356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.974 [2024-07-22 10:55:38.546361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.974 [2024-07-22 10:55:38.546365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.974 [2024-07-22 10:55:38.546376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.974 qpair failed and we were unable to recover it. 00:39:32.974 [2024-07-22 10:55:38.556427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.974 [2024-07-22 10:55:38.556466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.974 [2024-07-22 10:55:38.556477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.974 [2024-07-22 10:55:38.556482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.974 [2024-07-22 10:55:38.556486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.974 [2024-07-22 10:55:38.556499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.974 qpair failed and we were unable to recover it. 00:39:32.974 [2024-07-22 10:55:38.566523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.974 [2024-07-22 10:55:38.566569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.974 [2024-07-22 10:55:38.566580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.974 [2024-07-22 10:55:38.566585] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.974 [2024-07-22 10:55:38.566590] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.974 [2024-07-22 10:55:38.566600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.974 qpair failed and we were unable to recover it. 
00:39:32.974 [2024-07-22 10:55:38.576372] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.974 [2024-07-22 10:55:38.576424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.974 [2024-07-22 10:55:38.576435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.974 [2024-07-22 10:55:38.576441] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.974 [2024-07-22 10:55:38.576445] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.974 [2024-07-22 10:55:38.576455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.974 qpair failed and we were unable to recover it. 00:39:32.974 [2024-07-22 10:55:38.586539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.974 [2024-07-22 10:55:38.586621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.974 [2024-07-22 10:55:38.586632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.974 [2024-07-22 10:55:38.586638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.974 [2024-07-22 10:55:38.586642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.974 [2024-07-22 10:55:38.586653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.974 qpair failed and we were unable to recover it. 00:39:32.974 [2024-07-22 10:55:38.596467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.974 [2024-07-22 10:55:38.596509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.974 [2024-07-22 10:55:38.596520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.974 [2024-07-22 10:55:38.596525] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.974 [2024-07-22 10:55:38.596529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.974 [2024-07-22 10:55:38.596539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.974 qpair failed and we were unable to recover it. 
00:39:32.974 [2024-07-22 10:55:38.606451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.974 [2024-07-22 10:55:38.606497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.974 [2024-07-22 10:55:38.606508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.974 [2024-07-22 10:55:38.606513] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.974 [2024-07-22 10:55:38.606517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.974 [2024-07-22 10:55:38.606528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.974 qpair failed and we were unable to recover it. 00:39:32.974 [2024-07-22 10:55:38.616620] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.974 [2024-07-22 10:55:38.616668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.974 [2024-07-22 10:55:38.616678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.974 [2024-07-22 10:55:38.616683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.974 [2024-07-22 10:55:38.616688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.974 [2024-07-22 10:55:38.616698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.974 qpair failed and we were unable to recover it. 00:39:32.974 [2024-07-22 10:55:38.626685] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.974 [2024-07-22 10:55:38.626762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.974 [2024-07-22 10:55:38.626773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.974 [2024-07-22 10:55:38.626779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.974 [2024-07-22 10:55:38.626783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.974 [2024-07-22 10:55:38.626793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.974 qpair failed and we were unable to recover it. 
00:39:32.974 [2024-07-22 10:55:38.636669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.974 [2024-07-22 10:55:38.636716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.974 [2024-07-22 10:55:38.636727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.974 [2024-07-22 10:55:38.636732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.974 [2024-07-22 10:55:38.636736] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.974 [2024-07-22 10:55:38.636746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.974 qpair failed and we were unable to recover it. 00:39:32.974 [2024-07-22 10:55:38.646693] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.974 [2024-07-22 10:55:38.646736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.974 [2024-07-22 10:55:38.646747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.974 [2024-07-22 10:55:38.646752] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.974 [2024-07-22 10:55:38.646759] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.974 [2024-07-22 10:55:38.646769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.974 qpair failed and we were unable to recover it. 00:39:32.974 [2024-07-22 10:55:38.656722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.974 [2024-07-22 10:55:38.656772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.974 [2024-07-22 10:55:38.656783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.974 [2024-07-22 10:55:38.656788] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.974 [2024-07-22 10:55:38.656792] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.974 [2024-07-22 10:55:38.656803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.975 qpair failed and we were unable to recover it. 
00:39:32.975 [2024-07-22 10:55:38.666737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.975 [2024-07-22 10:55:38.666815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.975 [2024-07-22 10:55:38.666826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.975 [2024-07-22 10:55:38.666831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.975 [2024-07-22 10:55:38.666836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:32.975 [2024-07-22 10:55:38.666846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.975 qpair failed and we were unable to recover it. 00:39:33.236 [2024-07-22 10:55:38.676766] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.236 [2024-07-22 10:55:38.676806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.236 [2024-07-22 10:55:38.676817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.236 [2024-07-22 10:55:38.676822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.236 [2024-07-22 10:55:38.676826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.236 [2024-07-22 10:55:38.676837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.236 qpair failed and we were unable to recover it. 00:39:33.236 [2024-07-22 10:55:38.686791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.236 [2024-07-22 10:55:38.686848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.237 [2024-07-22 10:55:38.686858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.237 [2024-07-22 10:55:38.686864] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.237 [2024-07-22 10:55:38.686868] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.237 [2024-07-22 10:55:38.686878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.237 qpair failed and we were unable to recover it. 
00:39:33.237 [2024-07-22 10:55:38.696816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.237 [2024-07-22 10:55:38.696867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.237 [2024-07-22 10:55:38.696878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.237 [2024-07-22 10:55:38.696883] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.237 [2024-07-22 10:55:38.696887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.237 [2024-07-22 10:55:38.696898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.237 qpair failed and we were unable to recover it. 00:39:33.237 [2024-07-22 10:55:38.706828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.237 [2024-07-22 10:55:38.706870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.237 [2024-07-22 10:55:38.706880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.237 [2024-07-22 10:55:38.706885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.237 [2024-07-22 10:55:38.706890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.237 [2024-07-22 10:55:38.706900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.237 qpair failed and we were unable to recover it. 00:39:33.237 [2024-07-22 10:55:38.716859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.237 [2024-07-22 10:55:38.716906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.237 [2024-07-22 10:55:38.716917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.237 [2024-07-22 10:55:38.716922] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.237 [2024-07-22 10:55:38.716926] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.237 [2024-07-22 10:55:38.716937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.237 qpair failed and we were unable to recover it. 
00:39:33.237 [2024-07-22 10:55:38.726892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.237 [2024-07-22 10:55:38.726936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.237 [2024-07-22 10:55:38.726947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.237 [2024-07-22 10:55:38.726952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.237 [2024-07-22 10:55:38.726956] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.237 [2024-07-22 10:55:38.726967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.237 qpair failed and we were unable to recover it. 00:39:33.237 [2024-07-22 10:55:38.736926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.237 [2024-07-22 10:55:38.736974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.237 [2024-07-22 10:55:38.736985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.237 [2024-07-22 10:55:38.736993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.237 [2024-07-22 10:55:38.736997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.237 [2024-07-22 10:55:38.737008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.237 qpair failed and we were unable to recover it. 00:39:33.237 [2024-07-22 10:55:38.746891] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.237 [2024-07-22 10:55:38.746936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.237 [2024-07-22 10:55:38.746947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.237 [2024-07-22 10:55:38.746952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.237 [2024-07-22 10:55:38.746957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.237 [2024-07-22 10:55:38.746967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.237 qpair failed and we were unable to recover it. 
00:39:33.237 [2024-07-22 10:55:38.756977] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.237 [2024-07-22 10:55:38.757019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.237 [2024-07-22 10:55:38.757030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.237 [2024-07-22 10:55:38.757035] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.237 [2024-07-22 10:55:38.757039] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.237 [2024-07-22 10:55:38.757049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.237 qpair failed and we were unable to recover it. 00:39:33.237 [2024-07-22 10:55:38.767009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.237 [2024-07-22 10:55:38.767055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.237 [2024-07-22 10:55:38.767066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.237 [2024-07-22 10:55:38.767070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.237 [2024-07-22 10:55:38.767075] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.237 [2024-07-22 10:55:38.767085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.237 qpair failed and we were unable to recover it. 00:39:33.237 [2024-07-22 10:55:38.777045] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.237 [2024-07-22 10:55:38.777127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.237 [2024-07-22 10:55:38.777138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.237 [2024-07-22 10:55:38.777143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.237 [2024-07-22 10:55:38.777147] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.237 [2024-07-22 10:55:38.777157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.237 qpair failed and we were unable to recover it. 
00:39:33.237 [2024-07-22 10:55:38.787066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.237 [2024-07-22 10:55:38.787105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.237 [2024-07-22 10:55:38.787116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.237 [2024-07-22 10:55:38.787121] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.237 [2024-07-22 10:55:38.787125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.237 [2024-07-22 10:55:38.787135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.237 qpair failed and we were unable to recover it. 00:39:33.237 [2024-07-22 10:55:38.797092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.237 [2024-07-22 10:55:38.797133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.237 [2024-07-22 10:55:38.797144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.237 [2024-07-22 10:55:38.797149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.237 [2024-07-22 10:55:38.797153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.237 [2024-07-22 10:55:38.797163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.237 qpair failed and we were unable to recover it. 00:39:33.237 [2024-07-22 10:55:38.807088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.237 [2024-07-22 10:55:38.807132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.237 [2024-07-22 10:55:38.807143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.237 [2024-07-22 10:55:38.807148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.237 [2024-07-22 10:55:38.807152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.237 [2024-07-22 10:55:38.807162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.237 qpair failed and we were unable to recover it. 
00:39:33.237 [2024-07-22 10:55:38.817154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.237 [2024-07-22 10:55:38.817205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.237 [2024-07-22 10:55:38.817216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.237 [2024-07-22 10:55:38.817221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.237 [2024-07-22 10:55:38.817225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.237 [2024-07-22 10:55:38.817235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.237 qpair failed and we were unable to recover it. 00:39:33.237 [2024-07-22 10:55:38.827225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.237 [2024-07-22 10:55:38.827298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.237 [2024-07-22 10:55:38.827309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.237 [2024-07-22 10:55:38.827317] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.238 [2024-07-22 10:55:38.827321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.238 [2024-07-22 10:55:38.827332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.238 qpair failed and we were unable to recover it. 00:39:33.238 [2024-07-22 10:55:38.837165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.238 [2024-07-22 10:55:38.837212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.238 [2024-07-22 10:55:38.837223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.238 [2024-07-22 10:55:38.837228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.238 [2024-07-22 10:55:38.837233] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.238 [2024-07-22 10:55:38.837243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.238 qpair failed and we were unable to recover it. 
00:39:33.238 [2024-07-22 10:55:38.847180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.238 [2024-07-22 10:55:38.847229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.238 [2024-07-22 10:55:38.847240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.238 [2024-07-22 10:55:38.847245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.238 [2024-07-22 10:55:38.847249] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.238 [2024-07-22 10:55:38.847259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.238 qpair failed and we were unable to recover it. 00:39:33.238 [2024-07-22 10:55:38.857260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.238 [2024-07-22 10:55:38.857311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.238 [2024-07-22 10:55:38.857322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.238 [2024-07-22 10:55:38.857327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.238 [2024-07-22 10:55:38.857332] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.238 [2024-07-22 10:55:38.857342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.238 qpair failed and we were unable to recover it. 00:39:33.238 [2024-07-22 10:55:38.867272] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.238 [2024-07-22 10:55:38.867315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.238 [2024-07-22 10:55:38.867327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.238 [2024-07-22 10:55:38.867332] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.238 [2024-07-22 10:55:38.867337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.238 [2024-07-22 10:55:38.867347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.238 qpair failed and we were unable to recover it. 
00:39:33.238 [2024-07-22 10:55:38.877297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.238 [2024-07-22 10:55:38.877355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.238 [2024-07-22 10:55:38.877366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.238 [2024-07-22 10:55:38.877371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.238 [2024-07-22 10:55:38.877376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.238 [2024-07-22 10:55:38.877386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.238 qpair failed and we were unable to recover it. 00:39:33.238 [2024-07-22 10:55:38.887383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.238 [2024-07-22 10:55:38.887451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.238 [2024-07-22 10:55:38.887462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.238 [2024-07-22 10:55:38.887466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.238 [2024-07-22 10:55:38.887471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.238 [2024-07-22 10:55:38.887482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.238 qpair failed and we were unable to recover it. 00:39:33.238 [2024-07-22 10:55:38.897353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.238 [2024-07-22 10:55:38.897403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.238 [2024-07-22 10:55:38.897414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.238 [2024-07-22 10:55:38.897419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.238 [2024-07-22 10:55:38.897424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.238 [2024-07-22 10:55:38.897435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.238 qpair failed and we were unable to recover it. 
00:39:33.238 [2024-07-22 10:55:38.907372] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.238 [2024-07-22 10:55:38.907415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.238 [2024-07-22 10:55:38.907426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.238 [2024-07-22 10:55:38.907431] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.238 [2024-07-22 10:55:38.907436] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.238 [2024-07-22 10:55:38.907446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.238 qpair failed and we were unable to recover it. 00:39:33.238 [2024-07-22 10:55:38.917406] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.238 [2024-07-22 10:55:38.917451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.238 [2024-07-22 10:55:38.917466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.238 [2024-07-22 10:55:38.917471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.238 [2024-07-22 10:55:38.917475] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.238 [2024-07-22 10:55:38.917486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.238 qpair failed and we were unable to recover it. 00:39:33.238 [2024-07-22 10:55:38.927324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.238 [2024-07-22 10:55:38.927373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.238 [2024-07-22 10:55:38.927384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.238 [2024-07-22 10:55:38.927389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.238 [2024-07-22 10:55:38.927397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.238 [2024-07-22 10:55:38.927408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.238 qpair failed and we were unable to recover it. 
00:39:33.501 [2024-07-22 10:55:38.937457] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.501 [2024-07-22 10:55:38.937502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.501 [2024-07-22 10:55:38.937514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.501 [2024-07-22 10:55:38.937519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.501 [2024-07-22 10:55:38.937523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.501 [2024-07-22 10:55:38.937533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.501 qpair failed and we were unable to recover it. 00:39:33.501 [2024-07-22 10:55:38.947487] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.501 [2024-07-22 10:55:38.947530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.501 [2024-07-22 10:55:38.947542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.501 [2024-07-22 10:55:38.947547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.501 [2024-07-22 10:55:38.947552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.501 [2024-07-22 10:55:38.947563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.501 qpair failed and we were unable to recover it. 00:39:33.501 [2024-07-22 10:55:38.957382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.501 [2024-07-22 10:55:38.957430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.501 [2024-07-22 10:55:38.957441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.501 [2024-07-22 10:55:38.957447] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.501 [2024-07-22 10:55:38.957451] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.501 [2024-07-22 10:55:38.957465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.501 qpair failed and we were unable to recover it. 
00:39:33.501 [2024-07-22 10:55:38.967538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.501 [2024-07-22 10:55:38.967584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.501 [2024-07-22 10:55:38.967595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.501 [2024-07-22 10:55:38.967600] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.501 [2024-07-22 10:55:38.967605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.501 [2024-07-22 10:55:38.967616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.501 qpair failed and we were unable to recover it. 00:39:33.501 [2024-07-22 10:55:38.977570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.501 [2024-07-22 10:55:38.977615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.501 [2024-07-22 10:55:38.977626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.501 [2024-07-22 10:55:38.977631] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.501 [2024-07-22 10:55:38.977635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.501 [2024-07-22 10:55:38.977646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.501 qpair failed and we were unable to recover it. 00:39:33.501 [2024-07-22 10:55:38.987605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.501 [2024-07-22 10:55:38.987649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.501 [2024-07-22 10:55:38.987660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.501 [2024-07-22 10:55:38.987665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.501 [2024-07-22 10:55:38.987670] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.501 [2024-07-22 10:55:38.987680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.501 qpair failed and we were unable to recover it. 
00:39:33.501 [2024-07-22 10:55:38.997617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.501 [2024-07-22 10:55:38.997660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.501 [2024-07-22 10:55:38.997670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.501 [2024-07-22 10:55:38.997675] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.501 [2024-07-22 10:55:38.997680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.501 [2024-07-22 10:55:38.997690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.501 qpair failed and we were unable to recover it. 00:39:33.501 [2024-07-22 10:55:39.007639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.501 [2024-07-22 10:55:39.007681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.501 [2024-07-22 10:55:39.007694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.501 [2024-07-22 10:55:39.007700] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.501 [2024-07-22 10:55:39.007704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.501 [2024-07-22 10:55:39.007715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.501 qpair failed and we were unable to recover it. 00:39:33.501 [2024-07-22 10:55:39.017696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.501 [2024-07-22 10:55:39.017783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.501 [2024-07-22 10:55:39.017794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.501 [2024-07-22 10:55:39.017799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.501 [2024-07-22 10:55:39.017805] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.501 [2024-07-22 10:55:39.017815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.501 qpair failed and we were unable to recover it. 
00:39:33.501 [2024-07-22 10:55:39.027687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.501 [2024-07-22 10:55:39.027731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.501 [2024-07-22 10:55:39.027742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.501 [2024-07-22 10:55:39.027747] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.501 [2024-07-22 10:55:39.027751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.501 [2024-07-22 10:55:39.027762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.501 qpair failed and we were unable to recover it. 00:39:33.501 [2024-07-22 10:55:39.037616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.501 [2024-07-22 10:55:39.037671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.501 [2024-07-22 10:55:39.037681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.501 [2024-07-22 10:55:39.037686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.501 [2024-07-22 10:55:39.037691] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.501 [2024-07-22 10:55:39.037702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.501 qpair failed and we were unable to recover it. 00:39:33.501 [2024-07-22 10:55:39.047670] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.501 [2024-07-22 10:55:39.047761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.501 [2024-07-22 10:55:39.047772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.501 [2024-07-22 10:55:39.047777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.501 [2024-07-22 10:55:39.047785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.501 [2024-07-22 10:55:39.047795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.501 qpair failed and we were unable to recover it. 
00:39:33.501 [2024-07-22 10:55:39.057637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.501 [2024-07-22 10:55:39.057691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.501 [2024-07-22 10:55:39.057702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.501 [2024-07-22 10:55:39.057707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.501 [2024-07-22 10:55:39.057711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.501 [2024-07-22 10:55:39.057722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.501 qpair failed and we were unable to recover it. 00:39:33.501 [2024-07-22 10:55:39.067792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.501 [2024-07-22 10:55:39.067837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.501 [2024-07-22 10:55:39.067848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.501 [2024-07-22 10:55:39.067853] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.501 [2024-07-22 10:55:39.067858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.501 [2024-07-22 10:55:39.067868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.501 qpair failed and we were unable to recover it. 00:39:33.501 [2024-07-22 10:55:39.077824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.501 [2024-07-22 10:55:39.077871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.502 [2024-07-22 10:55:39.077881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.502 [2024-07-22 10:55:39.077886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.502 [2024-07-22 10:55:39.077891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.502 [2024-07-22 10:55:39.077901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.502 qpair failed and we were unable to recover it. 
00:39:33.502 [2024-07-22 10:55:39.087859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.502 [2024-07-22 10:55:39.087904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.502 [2024-07-22 10:55:39.087914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.502 [2024-07-22 10:55:39.087919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.502 [2024-07-22 10:55:39.087923] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.502 [2024-07-22 10:55:39.087934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.502 qpair failed and we were unable to recover it. 00:39:33.502 [2024-07-22 10:55:39.097851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.502 [2024-07-22 10:55:39.097904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.502 [2024-07-22 10:55:39.097915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.502 [2024-07-22 10:55:39.097920] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.502 [2024-07-22 10:55:39.097924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.502 [2024-07-22 10:55:39.097934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.502 qpair failed and we were unable to recover it. 00:39:33.502 [2024-07-22 10:55:39.107931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.502 [2024-07-22 10:55:39.108018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.502 [2024-07-22 10:55:39.108029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.502 [2024-07-22 10:55:39.108035] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.502 [2024-07-22 10:55:39.108040] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.502 [2024-07-22 10:55:39.108050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.502 qpair failed and we were unable to recover it. 
00:39:33.502 [2024-07-22 10:55:39.117936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.502 [2024-07-22 10:55:39.118013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.502 [2024-07-22 10:55:39.118024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.502 [2024-07-22 10:55:39.118029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.502 [2024-07-22 10:55:39.118033] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.502 [2024-07-22 10:55:39.118043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.502 qpair failed and we were unable to recover it. 00:39:33.502 [2024-07-22 10:55:39.127963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.502 [2024-07-22 10:55:39.128008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.502 [2024-07-22 10:55:39.128019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.502 [2024-07-22 10:55:39.128024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.502 [2024-07-22 10:55:39.128028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.502 [2024-07-22 10:55:39.128039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.502 qpair failed and we were unable to recover it. 00:39:33.502 [2024-07-22 10:55:39.137984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.502 [2024-07-22 10:55:39.138030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.502 [2024-07-22 10:55:39.138041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.502 [2024-07-22 10:55:39.138048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.502 [2024-07-22 10:55:39.138053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.502 [2024-07-22 10:55:39.138063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.502 qpair failed and we were unable to recover it. 
00:39:33.502 [2024-07-22 10:55:39.148009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.502 [2024-07-22 10:55:39.148050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.502 [2024-07-22 10:55:39.148061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.502 [2024-07-22 10:55:39.148066] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.502 [2024-07-22 10:55:39.148070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.502 [2024-07-22 10:55:39.148080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.502 qpair failed and we were unable to recover it. 00:39:33.502 [2024-07-22 10:55:39.158045] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.502 [2024-07-22 10:55:39.158088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.502 [2024-07-22 10:55:39.158100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.502 [2024-07-22 10:55:39.158105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.502 [2024-07-22 10:55:39.158110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.502 [2024-07-22 10:55:39.158121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.502 qpair failed and we were unable to recover it. 00:39:33.502 [2024-07-22 10:55:39.168064] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.502 [2024-07-22 10:55:39.168119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.502 [2024-07-22 10:55:39.168137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.502 [2024-07-22 10:55:39.168143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.502 [2024-07-22 10:55:39.168149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.502 [2024-07-22 10:55:39.168162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.502 qpair failed and we were unable to recover it. 
00:39:33.502 [2024-07-22 10:55:39.178092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.502 [2024-07-22 10:55:39.178139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.502 [2024-07-22 10:55:39.178151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.502 [2024-07-22 10:55:39.178156] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.502 [2024-07-22 10:55:39.178161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.502 [2024-07-22 10:55:39.178172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.502 qpair failed and we were unable to recover it. 00:39:33.502 [2024-07-22 10:55:39.188162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.502 [2024-07-22 10:55:39.188240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.502 [2024-07-22 10:55:39.188258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.502 [2024-07-22 10:55:39.188265] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.502 [2024-07-22 10:55:39.188270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.502 [2024-07-22 10:55:39.188285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.502 qpair failed and we were unable to recover it. 00:39:33.502 [2024-07-22 10:55:39.198161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.502 [2024-07-22 10:55:39.198203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.502 [2024-07-22 10:55:39.198215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.502 [2024-07-22 10:55:39.198220] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.502 [2024-07-22 10:55:39.198225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.502 [2024-07-22 10:55:39.198237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.502 qpair failed and we were unable to recover it. 
00:39:33.765 [2024-07-22 10:55:39.208186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.765 [2024-07-22 10:55:39.208230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.765 [2024-07-22 10:55:39.208241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.765 [2024-07-22 10:55:39.208247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.765 [2024-07-22 10:55:39.208251] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.765 [2024-07-22 10:55:39.208262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.765 qpair failed and we were unable to recover it. 00:39:33.765 [2024-07-22 10:55:39.218069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.765 [2024-07-22 10:55:39.218117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.765 [2024-07-22 10:55:39.218128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.765 [2024-07-22 10:55:39.218134] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.765 [2024-07-22 10:55:39.218138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.765 [2024-07-22 10:55:39.218151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.765 qpair failed and we were unable to recover it. 00:39:33.765 [2024-07-22 10:55:39.228229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.765 [2024-07-22 10:55:39.228276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.765 [2024-07-22 10:55:39.228287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.765 [2024-07-22 10:55:39.228296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.765 [2024-07-22 10:55:39.228300] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.765 [2024-07-22 10:55:39.228311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.765 qpair failed and we were unable to recover it. 
00:39:33.765 [2024-07-22 10:55:39.238301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.765 [2024-07-22 10:55:39.238391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.765 [2024-07-22 10:55:39.238406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.765 [2024-07-22 10:55:39.238411] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.765 [2024-07-22 10:55:39.238415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.765 [2024-07-22 10:55:39.238426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.765 qpair failed and we were unable to recover it. 00:39:33.765 [2024-07-22 10:55:39.248151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.765 [2024-07-22 10:55:39.248196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.765 [2024-07-22 10:55:39.248207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.765 [2024-07-22 10:55:39.248212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.765 [2024-07-22 10:55:39.248216] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.765 [2024-07-22 10:55:39.248227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.765 qpair failed and we were unable to recover it. 00:39:33.765 [2024-07-22 10:55:39.258316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.765 [2024-07-22 10:55:39.258369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.765 [2024-07-22 10:55:39.258380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.765 [2024-07-22 10:55:39.258385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.765 [2024-07-22 10:55:39.258389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.765 [2024-07-22 10:55:39.258403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.765 qpair failed and we were unable to recover it. 
00:39:33.765 [2024-07-22 10:55:39.268225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.765 [2024-07-22 10:55:39.268284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.765 [2024-07-22 10:55:39.268295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.765 [2024-07-22 10:55:39.268300] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.765 [2024-07-22 10:55:39.268304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.765 [2024-07-22 10:55:39.268314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.765 qpair failed and we were unable to recover it. 00:39:33.765 [2024-07-22 10:55:39.278320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.765 [2024-07-22 10:55:39.278369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.765 [2024-07-22 10:55:39.278380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.765 [2024-07-22 10:55:39.278385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.765 [2024-07-22 10:55:39.278389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.765 [2024-07-22 10:55:39.278403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.765 qpair failed and we were unable to recover it. 00:39:33.765 [2024-07-22 10:55:39.288387] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.765 [2024-07-22 10:55:39.288432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.765 [2024-07-22 10:55:39.288443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.765 [2024-07-22 10:55:39.288448] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.765 [2024-07-22 10:55:39.288452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.765 [2024-07-22 10:55:39.288463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.765 qpair failed and we were unable to recover it. 
00:39:33.765 [2024-07-22 10:55:39.298402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.765 [2024-07-22 10:55:39.298452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.765 [2024-07-22 10:55:39.298463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.765 [2024-07-22 10:55:39.298468] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.765 [2024-07-22 10:55:39.298473] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.765 [2024-07-22 10:55:39.298483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.765 qpair failed and we were unable to recover it. 00:39:33.765 [2024-07-22 10:55:39.308422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.765 [2024-07-22 10:55:39.308469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.765 [2024-07-22 10:55:39.308480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.765 [2024-07-22 10:55:39.308486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.765 [2024-07-22 10:55:39.308490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.765 [2024-07-22 10:55:39.308501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.765 qpair failed and we were unable to recover it. 00:39:33.765 [2024-07-22 10:55:39.318475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.765 [2024-07-22 10:55:39.318563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.765 [2024-07-22 10:55:39.318576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.765 [2024-07-22 10:55:39.318581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.765 [2024-07-22 10:55:39.318586] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.765 [2024-07-22 10:55:39.318597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.765 qpair failed and we were unable to recover it. 
00:39:33.765 [2024-07-22 10:55:39.328512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.765 [2024-07-22 10:55:39.328558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.765 [2024-07-22 10:55:39.328569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.765 [2024-07-22 10:55:39.328574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.766 [2024-07-22 10:55:39.328579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.766 [2024-07-22 10:55:39.328590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.766 qpair failed and we were unable to recover it. 00:39:33.766 [2024-07-22 10:55:39.338496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.766 [2024-07-22 10:55:39.338544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.766 [2024-07-22 10:55:39.338555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.766 [2024-07-22 10:55:39.338560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.766 [2024-07-22 10:55:39.338565] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.766 [2024-07-22 10:55:39.338575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.766 qpair failed and we were unable to recover it. 00:39:33.766 [2024-07-22 10:55:39.348561] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.766 [2024-07-22 10:55:39.348610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.766 [2024-07-22 10:55:39.348620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.766 [2024-07-22 10:55:39.348625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.766 [2024-07-22 10:55:39.348630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.766 [2024-07-22 10:55:39.348641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.766 qpair failed and we were unable to recover it. 
00:39:33.766 [2024-07-22 10:55:39.358597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.766 [2024-07-22 10:55:39.358638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.766 [2024-07-22 10:55:39.358648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.766 [2024-07-22 10:55:39.358653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.766 [2024-07-22 10:55:39.358658] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.766 [2024-07-22 10:55:39.358671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.766 qpair failed and we were unable to recover it. 00:39:33.766 [2024-07-22 10:55:39.368481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.766 [2024-07-22 10:55:39.368524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.766 [2024-07-22 10:55:39.368535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.766 [2024-07-22 10:55:39.368540] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.766 [2024-07-22 10:55:39.368545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.766 [2024-07-22 10:55:39.368555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.766 qpair failed and we were unable to recover it. 00:39:33.766 [2024-07-22 10:55:39.378501] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.766 [2024-07-22 10:55:39.378550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.766 [2024-07-22 10:55:39.378561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.766 [2024-07-22 10:55:39.378566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.766 [2024-07-22 10:55:39.378571] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.766 [2024-07-22 10:55:39.378582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.766 qpair failed and we were unable to recover it. 
00:39:33.766 [2024-07-22 10:55:39.388635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.766 [2024-07-22 10:55:39.388681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.766 [2024-07-22 10:55:39.388692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.766 [2024-07-22 10:55:39.388697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.766 [2024-07-22 10:55:39.388702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.766 [2024-07-22 10:55:39.388713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.766 qpair failed and we were unable to recover it. 00:39:33.766 [2024-07-22 10:55:39.398553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.766 [2024-07-22 10:55:39.398596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.766 [2024-07-22 10:55:39.398607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.766 [2024-07-22 10:55:39.398612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.766 [2024-07-22 10:55:39.398616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.766 [2024-07-22 10:55:39.398627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.766 qpair failed and we were unable to recover it. 00:39:33.766 [2024-07-22 10:55:39.408688] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.766 [2024-07-22 10:55:39.408733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.766 [2024-07-22 10:55:39.408746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.766 [2024-07-22 10:55:39.408751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.766 [2024-07-22 10:55:39.408755] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.766 [2024-07-22 10:55:39.408765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.766 qpair failed and we were unable to recover it. 
00:39:33.766 [2024-07-22 10:55:39.418746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.766 [2024-07-22 10:55:39.418794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.766 [2024-07-22 10:55:39.418804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.766 [2024-07-22 10:55:39.418809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.766 [2024-07-22 10:55:39.418814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.766 [2024-07-22 10:55:39.418824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.766 qpair failed and we were unable to recover it. 00:39:33.766 [2024-07-22 10:55:39.428639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.766 [2024-07-22 10:55:39.428683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.766 [2024-07-22 10:55:39.428693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.766 [2024-07-22 10:55:39.428698] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.766 [2024-07-22 10:55:39.428702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.766 [2024-07-22 10:55:39.428713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.766 qpair failed and we were unable to recover it. 00:39:33.766 [2024-07-22 10:55:39.438781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.766 [2024-07-22 10:55:39.438826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.766 [2024-07-22 10:55:39.438837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.766 [2024-07-22 10:55:39.438842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.766 [2024-07-22 10:55:39.438846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.766 [2024-07-22 10:55:39.438856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.766 qpair failed and we were unable to recover it. 
00:39:33.766 [2024-07-22 10:55:39.448738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.766 [2024-07-22 10:55:39.448779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.766 [2024-07-22 10:55:39.448789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.766 [2024-07-22 10:55:39.448794] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.766 [2024-07-22 10:55:39.448802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.766 [2024-07-22 10:55:39.448812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.766 qpair failed and we were unable to recover it. 00:39:33.766 [2024-07-22 10:55:39.458717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.766 [2024-07-22 10:55:39.458763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.766 [2024-07-22 10:55:39.458774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.766 [2024-07-22 10:55:39.458779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.766 [2024-07-22 10:55:39.458784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:33.766 [2024-07-22 10:55:39.458795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.766 qpair failed and we were unable to recover it. 00:39:34.030 [2024-07-22 10:55:39.468863] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.030 [2024-07-22 10:55:39.468950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.030 [2024-07-22 10:55:39.468962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.030 [2024-07-22 10:55:39.468967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.030 [2024-07-22 10:55:39.468971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.030 [2024-07-22 10:55:39.468982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.030 qpair failed and we were unable to recover it. 
00:39:34.030 [2024-07-22 10:55:39.478824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.030 [2024-07-22 10:55:39.478901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.030 [2024-07-22 10:55:39.478912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.030 [2024-07-22 10:55:39.478917] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.030 [2024-07-22 10:55:39.478921] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.030 [2024-07-22 10:55:39.478932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.030 qpair failed and we were unable to recover it. 00:39:34.030 [2024-07-22 10:55:39.488893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.030 [2024-07-22 10:55:39.488934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.030 [2024-07-22 10:55:39.488945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.030 [2024-07-22 10:55:39.488950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.030 [2024-07-22 10:55:39.488955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.030 [2024-07-22 10:55:39.488966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.030 qpair failed and we were unable to recover it. 00:39:34.030 [2024-07-22 10:55:39.498956] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.030 [2024-07-22 10:55:39.499043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.030 [2024-07-22 10:55:39.499054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.030 [2024-07-22 10:55:39.499059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.030 [2024-07-22 10:55:39.499064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.030 [2024-07-22 10:55:39.499074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.030 qpair failed and we were unable to recover it. 
00:39:34.030 [2024-07-22 10:55:39.509045] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.030 [2024-07-22 10:55:39.509092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.030 [2024-07-22 10:55:39.509103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.030 [2024-07-22 10:55:39.509108] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.030 [2024-07-22 10:55:39.509112] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.030 [2024-07-22 10:55:39.509122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.030 qpair failed and we were unable to recover it. 00:39:34.030 [2024-07-22 10:55:39.518994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.030 [2024-07-22 10:55:39.519041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.030 [2024-07-22 10:55:39.519052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.030 [2024-07-22 10:55:39.519057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.030 [2024-07-22 10:55:39.519062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.030 [2024-07-22 10:55:39.519072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.030 qpair failed and we were unable to recover it. 00:39:34.030 [2024-07-22 10:55:39.529072] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.030 [2024-07-22 10:55:39.529115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.030 [2024-07-22 10:55:39.529126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.030 [2024-07-22 10:55:39.529131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.030 [2024-07-22 10:55:39.529135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.030 [2024-07-22 10:55:39.529145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.030 qpair failed and we were unable to recover it. 
00:39:34.030 [2024-07-22 10:55:39.539076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.030 [2024-07-22 10:55:39.539156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.030 [2024-07-22 10:55:39.539167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.030 [2024-07-22 10:55:39.539172] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.030 [2024-07-22 10:55:39.539179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.030 [2024-07-22 10:55:39.539190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.030 qpair failed and we were unable to recover it. 00:39:34.030 [2024-07-22 10:55:39.549134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.030 [2024-07-22 10:55:39.549175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.030 [2024-07-22 10:55:39.549186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.030 [2024-07-22 10:55:39.549191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.030 [2024-07-22 10:55:39.549195] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.030 [2024-07-22 10:55:39.549205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.030 qpair failed and we were unable to recover it. 00:39:34.030 [2024-07-22 10:55:39.559118] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.030 [2024-07-22 10:55:39.559159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.030 [2024-07-22 10:55:39.559170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.030 [2024-07-22 10:55:39.559175] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.030 [2024-07-22 10:55:39.559180] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.030 [2024-07-22 10:55:39.559190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.030 qpair failed and we were unable to recover it. 
00:39:34.030 [2024-07-22 10:55:39.569140] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.030 [2024-07-22 10:55:39.569184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.030 [2024-07-22 10:55:39.569194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.030 [2024-07-22 10:55:39.569199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.030 [2024-07-22 10:55:39.569204] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.030 [2024-07-22 10:55:39.569214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.030 qpair failed and we were unable to recover it. 00:39:34.030 [2024-07-22 10:55:39.579170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.030 [2024-07-22 10:55:39.579214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.030 [2024-07-22 10:55:39.579225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.030 [2024-07-22 10:55:39.579230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.030 [2024-07-22 10:55:39.579234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.030 [2024-07-22 10:55:39.579245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.030 qpair failed and we were unable to recover it. 00:39:34.030 [2024-07-22 10:55:39.589194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.030 [2024-07-22 10:55:39.589240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.030 [2024-07-22 10:55:39.589252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.031 [2024-07-22 10:55:39.589258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.031 [2024-07-22 10:55:39.589262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.031 [2024-07-22 10:55:39.589273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.031 qpair failed and we were unable to recover it. 
00:39:34.031 [2024-07-22 10:55:39.599226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.031 [2024-07-22 10:55:39.599273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.031 [2024-07-22 10:55:39.599285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.031 [2024-07-22 10:55:39.599289] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.031 [2024-07-22 10:55:39.599294] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.031 [2024-07-22 10:55:39.599305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.031 qpair failed and we were unable to recover it. 00:39:34.031 [2024-07-22 10:55:39.609260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.031 [2024-07-22 10:55:39.609352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.031 [2024-07-22 10:55:39.609364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.031 [2024-07-22 10:55:39.609369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.031 [2024-07-22 10:55:39.609373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.031 [2024-07-22 10:55:39.609384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.031 qpair failed and we were unable to recover it. 00:39:34.031 [2024-07-22 10:55:39.619284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.031 [2024-07-22 10:55:39.619332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.031 [2024-07-22 10:55:39.619343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.031 [2024-07-22 10:55:39.619348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.031 [2024-07-22 10:55:39.619352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.031 [2024-07-22 10:55:39.619363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.031 qpair failed and we were unable to recover it. 
00:39:34.031 [2024-07-22 10:55:39.629315] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.031 [2024-07-22 10:55:39.629374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.031 [2024-07-22 10:55:39.629385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.031 [2024-07-22 10:55:39.629398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.031 [2024-07-22 10:55:39.629403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.031 [2024-07-22 10:55:39.629413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.031 qpair failed and we were unable to recover it. 00:39:34.031 [2024-07-22 10:55:39.639336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.031 [2024-07-22 10:55:39.639384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.031 [2024-07-22 10:55:39.639397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.031 [2024-07-22 10:55:39.639403] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.031 [2024-07-22 10:55:39.639407] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.031 [2024-07-22 10:55:39.639417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.031 qpair failed and we were unable to recover it. 00:39:34.031 [2024-07-22 10:55:39.649363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.031 [2024-07-22 10:55:39.649409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.031 [2024-07-22 10:55:39.649420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.031 [2024-07-22 10:55:39.649425] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.031 [2024-07-22 10:55:39.649430] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.031 [2024-07-22 10:55:39.649440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.031 qpair failed and we were unable to recover it. 
00:39:34.031 [2024-07-22 10:55:39.659391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.031 [2024-07-22 10:55:39.659438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.031 [2024-07-22 10:55:39.659448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.031 [2024-07-22 10:55:39.659453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.031 [2024-07-22 10:55:39.659458] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.031 [2024-07-22 10:55:39.659468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.031 qpair failed and we were unable to recover it. 00:39:34.031 [2024-07-22 10:55:39.669287] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.031 [2024-07-22 10:55:39.669334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.031 [2024-07-22 10:55:39.669345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.031 [2024-07-22 10:55:39.669350] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.031 [2024-07-22 10:55:39.669355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.031 [2024-07-22 10:55:39.669365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.031 qpair failed and we were unable to recover it. 00:39:34.031 [2024-07-22 10:55:39.679492] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.031 [2024-07-22 10:55:39.679556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.031 [2024-07-22 10:55:39.679567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.031 [2024-07-22 10:55:39.679572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.031 [2024-07-22 10:55:39.679577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.031 [2024-07-22 10:55:39.679587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.031 qpair failed and we were unable to recover it. 
00:39:34.031 [2024-07-22 10:55:39.689471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.031 [2024-07-22 10:55:39.689514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.031 [2024-07-22 10:55:39.689525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.031 [2024-07-22 10:55:39.689530] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.031 [2024-07-22 10:55:39.689534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.031 [2024-07-22 10:55:39.689545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.031 qpair failed and we were unable to recover it. 00:39:34.031 [2024-07-22 10:55:39.699392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.031 [2024-07-22 10:55:39.699450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.031 [2024-07-22 10:55:39.699460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.031 [2024-07-22 10:55:39.699465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.031 [2024-07-22 10:55:39.699470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.031 [2024-07-22 10:55:39.699480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.031 qpair failed and we were unable to recover it. 00:39:34.031 [2024-07-22 10:55:39.709383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.031 [2024-07-22 10:55:39.709428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.031 [2024-07-22 10:55:39.709440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.031 [2024-07-22 10:55:39.709445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.031 [2024-07-22 10:55:39.709450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.031 [2024-07-22 10:55:39.709461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.031 qpair failed and we were unable to recover it. 
00:39:34.031 [2024-07-22 10:55:39.719543] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.031 [2024-07-22 10:55:39.719625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.031 [2024-07-22 10:55:39.719639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.031 [2024-07-22 10:55:39.719645] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.031 [2024-07-22 10:55:39.719649] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.031 [2024-07-22 10:55:39.719659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.031 qpair failed and we were unable to recover it. 00:39:34.295 [2024-07-22 10:55:39.729626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.295 [2024-07-22 10:55:39.729671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.295 [2024-07-22 10:55:39.729681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.295 [2024-07-22 10:55:39.729686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.295 [2024-07-22 10:55:39.729690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.295 [2024-07-22 10:55:39.729700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.295 qpair failed and we were unable to recover it. 00:39:34.295 [2024-07-22 10:55:39.739646] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.295 [2024-07-22 10:55:39.739731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.295 [2024-07-22 10:55:39.739742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.295 [2024-07-22 10:55:39.739748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.295 [2024-07-22 10:55:39.739752] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.295 [2024-07-22 10:55:39.739763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.295 qpair failed and we were unable to recover it. 
00:39:34.295 [2024-07-22 10:55:39.749667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.295 [2024-07-22 10:55:39.749747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.295 [2024-07-22 10:55:39.749757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.295 [2024-07-22 10:55:39.749762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.295 [2024-07-22 10:55:39.749767] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.295 [2024-07-22 10:55:39.749778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.295 qpair failed and we were unable to recover it. 00:39:34.295 [2024-07-22 10:55:39.759675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.295 [2024-07-22 10:55:39.759717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.295 [2024-07-22 10:55:39.759728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.295 [2024-07-22 10:55:39.759733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.295 [2024-07-22 10:55:39.759737] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.295 [2024-07-22 10:55:39.759750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.295 qpair failed and we were unable to recover it. 00:39:34.295 [2024-07-22 10:55:39.769672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.295 [2024-07-22 10:55:39.769718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.295 [2024-07-22 10:55:39.769729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.295 [2024-07-22 10:55:39.769734] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.295 [2024-07-22 10:55:39.769738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.295 [2024-07-22 10:55:39.769748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.295 qpair failed and we were unable to recover it. 
00:39:34.295 [2024-07-22 10:55:39.779728] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.295 [2024-07-22 10:55:39.779790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.295 [2024-07-22 10:55:39.779801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.295 [2024-07-22 10:55:39.779806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.296 [2024-07-22 10:55:39.779810] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.296 [2024-07-22 10:55:39.779820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.296 qpair failed and we were unable to recover it. 00:39:34.296 [2024-07-22 10:55:39.789736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.296 [2024-07-22 10:55:39.789784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.296 [2024-07-22 10:55:39.789795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.296 [2024-07-22 10:55:39.789800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.296 [2024-07-22 10:55:39.789804] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.296 [2024-07-22 10:55:39.789814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.296 qpair failed and we were unable to recover it. 00:39:34.296 [2024-07-22 10:55:39.799745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.296 [2024-07-22 10:55:39.799786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.296 [2024-07-22 10:55:39.799797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.296 [2024-07-22 10:55:39.799802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.296 [2024-07-22 10:55:39.799807] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.296 [2024-07-22 10:55:39.799816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.296 qpair failed and we were unable to recover it. 
00:39:34.296 [2024-07-22 10:55:39.809797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.296 [2024-07-22 10:55:39.809838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.296 [2024-07-22 10:55:39.809851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.296 [2024-07-22 10:55:39.809857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.296 [2024-07-22 10:55:39.809861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.296 [2024-07-22 10:55:39.809871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.296 qpair failed and we were unable to recover it. 00:39:34.296 [2024-07-22 10:55:39.819774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.296 [2024-07-22 10:55:39.819821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.296 [2024-07-22 10:55:39.819831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.296 [2024-07-22 10:55:39.819836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.296 [2024-07-22 10:55:39.819840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.296 [2024-07-22 10:55:39.819851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.296 qpair failed and we were unable to recover it. 00:39:34.296 [2024-07-22 10:55:39.829844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.296 [2024-07-22 10:55:39.829891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.296 [2024-07-22 10:55:39.829902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.296 [2024-07-22 10:55:39.829906] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.296 [2024-07-22 10:55:39.829911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.296 [2024-07-22 10:55:39.829921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.296 qpair failed and we were unable to recover it. 
00:39:34.296 [2024-07-22 10:55:39.839875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.296 [2024-07-22 10:55:39.839944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.296 [2024-07-22 10:55:39.839954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.296 [2024-07-22 10:55:39.839959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.296 [2024-07-22 10:55:39.839964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.296 [2024-07-22 10:55:39.839974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.296 qpair failed and we were unable to recover it. 00:39:34.296 [2024-07-22 10:55:39.849922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.296 [2024-07-22 10:55:39.849979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.296 [2024-07-22 10:55:39.849989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.296 [2024-07-22 10:55:39.849994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.296 [2024-07-22 10:55:39.850002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.296 [2024-07-22 10:55:39.850012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.296 qpair failed and we were unable to recover it. 00:39:34.296 [2024-07-22 10:55:39.859936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.296 [2024-07-22 10:55:39.859984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.296 [2024-07-22 10:55:39.859995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.296 [2024-07-22 10:55:39.859999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.296 [2024-07-22 10:55:39.860004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.296 [2024-07-22 10:55:39.860014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.296 qpair failed and we were unable to recover it. 
00:39:34.296 [2024-07-22 10:55:39.869957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.296 [2024-07-22 10:55:39.869996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.296 [2024-07-22 10:55:39.870007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.296 [2024-07-22 10:55:39.870012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.296 [2024-07-22 10:55:39.870016] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.296 [2024-07-22 10:55:39.870026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.296 qpair failed and we were unable to recover it. 00:39:34.296 [2024-07-22 10:55:39.879987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.296 [2024-07-22 10:55:39.880028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.296 [2024-07-22 10:55:39.880038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.296 [2024-07-22 10:55:39.880043] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.296 [2024-07-22 10:55:39.880048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.296 [2024-07-22 10:55:39.880058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.296 qpair failed and we were unable to recover it. 00:39:34.296 [2024-07-22 10:55:39.890010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.296 [2024-07-22 10:55:39.890054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.296 [2024-07-22 10:55:39.890065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.296 [2024-07-22 10:55:39.890070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.296 [2024-07-22 10:55:39.890074] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.296 [2024-07-22 10:55:39.890085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.296 qpair failed and we were unable to recover it. 
00:39:34.296 [2024-07-22 10:55:39.900037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.296 [2024-07-22 10:55:39.900090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.296 [2024-07-22 10:55:39.900109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.296 [2024-07-22 10:55:39.900115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.296 [2024-07-22 10:55:39.900120] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.296 [2024-07-22 10:55:39.900133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.296 qpair failed and we were unable to recover it. 00:39:34.296 [2024-07-22 10:55:39.910066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.296 [2024-07-22 10:55:39.910159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.296 [2024-07-22 10:55:39.910178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.296 [2024-07-22 10:55:39.910184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.296 [2024-07-22 10:55:39.910189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.296 [2024-07-22 10:55:39.910202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.296 qpair failed and we were unable to recover it. 00:39:34.296 [2024-07-22 10:55:39.920075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.296 [2024-07-22 10:55:39.920118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.297 [2024-07-22 10:55:39.920130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.297 [2024-07-22 10:55:39.920135] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.297 [2024-07-22 10:55:39.920140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.297 [2024-07-22 10:55:39.920151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.297 qpair failed and we were unable to recover it. 
00:39:34.297 [2024-07-22 10:55:39.930130] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.297 [2024-07-22 10:55:39.930172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.297 [2024-07-22 10:55:39.930184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.297 [2024-07-22 10:55:39.930189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.297 [2024-07-22 10:55:39.930193] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.297 [2024-07-22 10:55:39.930204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.297 qpair failed and we were unable to recover it. 00:39:34.297 [2024-07-22 10:55:39.940145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.297 [2024-07-22 10:55:39.940192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.297 [2024-07-22 10:55:39.940203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.297 [2024-07-22 10:55:39.940208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.297 [2024-07-22 10:55:39.940216] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.297 [2024-07-22 10:55:39.940226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.297 qpair failed and we were unable to recover it. 00:39:34.297 [2024-07-22 10:55:39.950156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.297 [2024-07-22 10:55:39.950198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.297 [2024-07-22 10:55:39.950209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.297 [2024-07-22 10:55:39.950214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.297 [2024-07-22 10:55:39.950218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.297 [2024-07-22 10:55:39.950229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.297 qpair failed and we were unable to recover it. 
00:39:34.297 [2024-07-22 10:55:39.960195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.297 [2024-07-22 10:55:39.960238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.297 [2024-07-22 10:55:39.960249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.297 [2024-07-22 10:55:39.960254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.297 [2024-07-22 10:55:39.960258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.297 [2024-07-22 10:55:39.960268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.297 qpair failed and we were unable to recover it. 00:39:34.297 [2024-07-22 10:55:39.970219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.297 [2024-07-22 10:55:39.970274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.297 [2024-07-22 10:55:39.970285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.297 [2024-07-22 10:55:39.970290] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.297 [2024-07-22 10:55:39.970295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.297 [2024-07-22 10:55:39.970306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.297 qpair failed and we were unable to recover it. 00:39:34.297 [2024-07-22 10:55:39.980272] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.297 [2024-07-22 10:55:39.980318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.297 [2024-07-22 10:55:39.980329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.297 [2024-07-22 10:55:39.980334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.297 [2024-07-22 10:55:39.980339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.297 [2024-07-22 10:55:39.980349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.297 qpair failed and we were unable to recover it. 
00:39:34.297 [2024-07-22 10:55:39.990286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.297 [2024-07-22 10:55:39.990330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.297 [2024-07-22 10:55:39.990341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.297 [2024-07-22 10:55:39.990346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.297 [2024-07-22 10:55:39.990351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.297 [2024-07-22 10:55:39.990362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.297 qpair failed and we were unable to recover it. 00:39:34.559 [2024-07-22 10:55:40.000270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.559 [2024-07-22 10:55:40.000315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.559 [2024-07-22 10:55:40.000326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.559 [2024-07-22 10:55:40.000331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.559 [2024-07-22 10:55:40.000335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.559 [2024-07-22 10:55:40.000346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.559 qpair failed and we were unable to recover it. 00:39:34.559 [2024-07-22 10:55:40.010315] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.559 [2024-07-22 10:55:40.010358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.559 [2024-07-22 10:55:40.010370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.559 [2024-07-22 10:55:40.010376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.559 [2024-07-22 10:55:40.010381] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.559 [2024-07-22 10:55:40.010392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.559 qpair failed and we were unable to recover it. 
00:39:34.559 [2024-07-22 10:55:40.020369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.559 [2024-07-22 10:55:40.020423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.559 [2024-07-22 10:55:40.020434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.559 [2024-07-22 10:55:40.020440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.559 [2024-07-22 10:55:40.020445] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1658000b90 00:39:34.559 [2024-07-22 10:55:40.020455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.559 qpair failed and we were unable to recover it. 00:39:34.559 [2024-07-22 10:55:40.020578] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:39:34.559 A controller has encountered a failure and is being reset. 00:39:34.559 [2024-07-22 10:55:40.020694] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c17820 (9): Bad file descriptor 00:39:34.559 Controller properly reset. 00:39:34.559 Read completed with error (sct=0, sc=8) 00:39:34.559 starting I/O failed 00:39:34.559 Read completed with error (sct=0, sc=8) 00:39:34.559 starting I/O failed 00:39:34.559 Read completed with error (sct=0, sc=8) 00:39:34.559 starting I/O failed 00:39:34.559 Read completed with error (sct=0, sc=8) 00:39:34.559 starting I/O failed 00:39:34.559 Read completed with error (sct=0, sc=8) 00:39:34.559 starting I/O failed 00:39:34.559 Read completed with error (sct=0, sc=8) 00:39:34.559 starting I/O failed 00:39:34.559 Read completed with error (sct=0, sc=8) 00:39:34.559 starting I/O failed 00:39:34.559 Read completed with error (sct=0, sc=8) 00:39:34.559 starting I/O failed 00:39:34.559 Read completed with error (sct=0, sc=8) 00:39:34.559 starting I/O failed 00:39:34.559 Read completed with error (sct=0, sc=8) 00:39:34.559 starting I/O failed 00:39:34.559 Read completed with error (sct=0, sc=8) 00:39:34.559 starting I/O failed 00:39:34.559 Read completed with error (sct=0, sc=8) 00:39:34.559 starting I/O failed 00:39:34.559 Read completed with error (sct=0, sc=8) 00:39:34.559 starting I/O failed 00:39:34.559 Read completed with error (sct=0, sc=8) 00:39:34.559 starting I/O failed 00:39:34.559 Read completed with error (sct=0, sc=8) 00:39:34.559 starting I/O failed 00:39:34.559 Write completed with error (sct=0, sc=8) 00:39:34.559 starting I/O failed 00:39:34.559 Write completed with error (sct=0, sc=8) 00:39:34.559 starting I/O failed 00:39:34.559 Write completed with error (sct=0, sc=8) 00:39:34.559 starting I/O failed 00:39:34.559 Read completed with error (sct=0, sc=8) 00:39:34.559 starting I/O failed 00:39:34.559 Write completed with error (sct=0, sc=8) 00:39:34.559 starting I/O failed 00:39:34.559 Read completed with error (sct=0, sc=8) 00:39:34.559 starting I/O failed 00:39:34.559 Read completed with error (sct=0, sc=8) 00:39:34.559 starting I/O failed 00:39:34.559 Write completed with error (sct=0, sc=8) 00:39:34.559 starting I/O failed 00:39:34.559 Read completed with error (sct=0, sc=8) 00:39:34.559 starting I/O failed 00:39:34.559 Read completed with error (sct=0, 
sc=8) 00:39:34.559 starting I/O failed 00:39:34.559 Write completed with error (sct=0, sc=8) 00:39:34.559 starting I/O failed 00:39:34.559 Write completed with error (sct=0, sc=8) 00:39:34.559 starting I/O failed 00:39:34.559 Read completed with error (sct=0, sc=8) 00:39:34.559 starting I/O failed 00:39:34.559 Write completed with error (sct=0, sc=8) 00:39:34.559 starting I/O failed 00:39:34.559 Read completed with error (sct=0, sc=8) 00:39:34.559 starting I/O failed 00:39:34.559 Read completed with error (sct=0, sc=8) 00:39:34.559 starting I/O failed 00:39:34.559 Read completed with error (sct=0, sc=8) 00:39:34.559 starting I/O failed 00:39:34.559 [2024-07-22 10:55:40.036134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:34.559 Read completed with error (sct=0, sc=8) 00:39:34.559 starting I/O failed 00:39:34.559 Read completed with error (sct=0, sc=8) 00:39:34.559 starting I/O failed 00:39:34.559 Read completed with error (sct=0, sc=8) 00:39:34.559 starting I/O failed 00:39:34.559 Read completed with error (sct=0, sc=8) 00:39:34.559 starting I/O failed 00:39:34.559 Read completed with error (sct=0, sc=8) 00:39:34.559 starting I/O failed 00:39:34.559 Read completed with error (sct=0, sc=8) 00:39:34.559 starting I/O failed 00:39:34.559 Write completed with error (sct=0, sc=8) 00:39:34.559 starting I/O failed 00:39:34.559 Write completed with error (sct=0, sc=8) 00:39:34.559 starting I/O failed 00:39:34.559 Read completed with error (sct=0, sc=8) 00:39:34.559 starting I/O failed 00:39:34.559 Read completed with error (sct=0, sc=8) 00:39:34.559 starting I/O failed 00:39:34.559 Read completed with error (sct=0, sc=8) 00:39:34.559 starting I/O failed 00:39:34.559 Write completed with error (sct=0, sc=8) 00:39:34.559 starting I/O failed 00:39:34.559 Read completed with error (sct=0, sc=8) 00:39:34.559 starting I/O failed 00:39:34.559 Write completed with error (sct=0, sc=8) 00:39:34.559 starting I/O failed 00:39:34.559 Read completed with error (sct=0, sc=8) 00:39:34.559 starting I/O failed 00:39:34.559 Read completed with error (sct=0, sc=8) 00:39:34.559 starting I/O failed 00:39:34.559 Read completed with error (sct=0, sc=8) 00:39:34.559 starting I/O failed 00:39:34.559 Write completed with error (sct=0, sc=8) 00:39:34.560 starting I/O failed 00:39:34.560 Read completed with error (sct=0, sc=8) 00:39:34.560 starting I/O failed 00:39:34.560 Write completed with error (sct=0, sc=8) 00:39:34.560 starting I/O failed 00:39:34.560 Write completed with error (sct=0, sc=8) 00:39:34.560 starting I/O failed 00:39:34.560 Write completed with error (sct=0, sc=8) 00:39:34.560 starting I/O failed 00:39:34.560 Write completed with error (sct=0, sc=8) 00:39:34.560 starting I/O failed 00:39:34.560 Write completed with error (sct=0, sc=8) 00:39:34.560 starting I/O failed 00:39:34.560 Write completed with error (sct=0, sc=8) 00:39:34.560 starting I/O failed 00:39:34.560 Read completed with error (sct=0, sc=8) 00:39:34.560 starting I/O failed 00:39:34.560 Read completed with error (sct=0, sc=8) 00:39:34.560 starting I/O failed 00:39:34.560 Read completed with error (sct=0, sc=8) 00:39:34.560 starting I/O failed 00:39:34.560 Read completed with error (sct=0, sc=8) 00:39:34.560 starting I/O failed 00:39:34.560 Write completed with error (sct=0, sc=8) 00:39:34.560 starting I/O failed 00:39:34.560 Read completed with error (sct=0, sc=8) 00:39:34.560 starting I/O failed 00:39:34.560 Write completed with error (sct=0, sc=8) 
00:39:34.560 starting I/O failed 00:39:34.560 [2024-07-22 10:55:40.041340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:39:34.560 Initializing NVMe Controllers 00:39:34.560 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:34.560 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:34.560 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:39:34.560 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:39:34.560 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:39:34.560 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:39:34.560 Initialization complete. Launching workers. 00:39:34.560 Starting thread on core 1 00:39:34.560 Starting thread on core 2 00:39:34.560 Starting thread on core 3 00:39:34.560 Starting thread on core 0 00:39:34.560 10:55:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:39:34.560 00:39:34.560 real 0m11.353s 00:39:34.560 user 0m21.270s 00:39:34.560 sys 0m3.606s 00:39:34.560 10:55:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:34.560 10:55:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:34.560 ************************************ 00:39:34.560 END TEST nvmf_target_disconnect_tc2 00:39:34.560 ************************************ 00:39:34.560 10:55:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:39:34.560 10:55:40 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:39:34.560 10:55:40 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:39:34.560 10:55:40 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:39:34.560 10:55:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:39:34.560 10:55:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:39:34.560 10:55:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:39:34.560 10:55:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:39:34.560 10:55:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:39:34.560 10:55:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:39:34.560 rmmod nvme_tcp 00:39:34.560 rmmod nvme_fabrics 00:39:34.560 rmmod nvme_keyring 00:39:34.560 10:55:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:39:34.560 10:55:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:39:34.560 10:55:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:39:34.560 10:55:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 2265835 ']' 00:39:34.560 10:55:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 2265835 00:39:34.560 10:55:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 2265835 ']' 00:39:34.560 10:55:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 2265835 00:39:34.560 10:55:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 
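With the tc2 case closed out above, nvmftestfini unloads the kernel initiator modules, kills the leftover target process, and flushes the test address, as the surrounding trace shows. Run by hand, the same cleanup is roughly (PID and interface name are copied from this run, so they only apply to this environment):

  modprobe -v -r nvme-tcp    # the rmmod output above shows this also drops nvme_fabrics and nvme_keyring
  kill 2265835               # nvmf target PID for this run
  ip -4 addr flush cvl_0_1   # initiator-side interface used by the test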
00:39:34.560 10:55:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:34.560 10:55:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2265835 00:39:34.560 10:55:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:39:34.560 10:55:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:39:34.560 10:55:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2265835' 00:39:34.560 killing process with pid 2265835 00:39:34.560 10:55:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 2265835 00:39:34.560 10:55:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 2265835 00:39:34.821 10:55:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:39:34.821 10:55:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:39:34.821 10:55:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:39:34.821 10:55:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:34.821 10:55:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:39:34.821 10:55:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:34.821 10:55:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:34.821 10:55:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:36.739 10:55:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:39:36.739 00:39:36.739 real 0m22.115s 00:39:36.739 user 0m49.231s 00:39:36.739 sys 0m9.987s 00:39:36.739 10:55:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:36.739 10:55:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:39:36.739 ************************************ 00:39:36.739 END TEST nvmf_target_disconnect 00:39:36.739 ************************************ 00:39:36.998 10:55:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:39:36.998 10:55:42 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:39:36.998 10:55:42 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:36.998 10:55:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:36.998 10:55:42 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:39:36.998 00:39:36.998 real 31m20.616s 00:39:36.998 user 77m15.123s 00:39:36.998 sys 8m46.830s 00:39:36.998 10:55:42 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:36.998 10:55:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:36.998 ************************************ 00:39:36.998 END TEST nvmf_tcp 00:39:36.998 ************************************ 00:39:36.998 10:55:42 -- common/autotest_common.sh@1142 -- # return 0 00:39:36.998 10:55:42 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:39:36.998 10:55:42 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:39:36.998 10:55:42 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:39:36.998 10:55:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:36.998 10:55:42 -- 
common/autotest_common.sh@10 -- # set +x 00:39:36.998 ************************************ 00:39:36.998 START TEST spdkcli_nvmf_tcp 00:39:36.998 ************************************ 00:39:36.998 10:55:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:39:36.998 * Looking for test storage... 00:39:36.998 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:39:36.998 10:55:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:39:36.998 10:55:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:39:36.998 10:55:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:39:36.998 10:55:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:37.261 10:55:42 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:39:37.261 10:55:42 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:37.261 10:55:42 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:37.261 10:55:42 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:37.261 10:55:42 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:37.261 10:55:42 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:37.261 10:55:42 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:37.261 10:55:42 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:37.261 10:55:42 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:37.261 10:55:42 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:37.261 10:55:42 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:37.261 10:55:42 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:37.261 10:55:42 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:37.261 10:55:42 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:37.261 10:55:42 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:37.261 10:55:42 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:37.261 10:55:42 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:37.261 10:55:42 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:37.261 10:55:42 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:37.261 10:55:42 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:37.261 10:55:42 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:37.262 10:55:42 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.262 10:55:42 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.262 10:55:42 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.262 10:55:42 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:39:37.262 10:55:42 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.262 10:55:42 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:39:37.262 10:55:42 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:37.262 10:55:42 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:37.262 10:55:42 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:37.262 10:55:42 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:37.262 10:55:42 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:37.262 10:55:42 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:37.262 10:55:42 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:37.262 10:55:42 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:37.262 10:55:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:39:37.262 10:55:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:39:37.262 10:55:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:39:37.262 10:55:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:39:37.262 10:55:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:39:37.262 10:55:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:37.262 10:55:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:39:37.262 10:55:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2267697 00:39:37.262 10:55:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2267697 00:39:37.262 10:55:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 2267697 ']' 00:39:37.262 10:55:42 spdkcli_nvmf_tcp -- 
spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:39:37.262 10:55:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:37.262 10:55:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:37.262 10:55:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:37.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:37.262 10:55:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:37.262 10:55:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:37.262 [2024-07-22 10:55:42.783969] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:39:37.262 [2024-07-22 10:55:42.784020] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2267697 ] 00:39:37.262 EAL: No free 2048 kB hugepages reported on node 1 00:39:37.262 [2024-07-22 10:55:42.850164] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:37.262 [2024-07-22 10:55:42.883746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:37.262 [2024-07-22 10:55:42.883749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:37.872 10:55:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:37.872 10:55:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:39:37.872 10:55:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:39:37.872 10:55:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:37.872 10:55:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:38.131 10:55:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:39:38.131 10:55:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:39:38.131 10:55:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:39:38.131 10:55:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:39:38.131 10:55:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:38.131 10:55:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:39:38.131 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:39:38.131 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:39:38.131 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:39:38.131 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:39:38.131 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:39:38.131 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:39:38.131 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:39:38.131 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:39:38.131 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 
00:39:38.131 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:39:38.131 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:38.131 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:39:38.131 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:39:38.131 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:38.131 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:39:38.131 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:39:38.131 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:39:38.131 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:39:38.131 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:38.131 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:39:38.131 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:39:38.131 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:39:38.131 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:39:38.131 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:38.131 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:39:38.131 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:39:38.131 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:39:38.131 ' 00:39:40.678 [2024-07-22 10:55:45.923063] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:41.617 [2024-07-22 10:55:47.086773] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:39:43.528 [2024-07-22 10:55:49.225063] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:39:45.437 [2024-07-22 10:55:51.058616] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:39:46.819 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:39:46.819 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:39:46.819 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:39:46.819 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:39:46.819 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:39:46.819 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:39:46.819 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:39:46.819 
Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:39:46.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:39:46.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:39:46.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:39:46.819 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:39:46.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:39:46.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:39:46.819 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:39:46.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:39:46.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:39:46.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:39:46.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:39:46.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:39:46.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:39:46.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:39:46.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:39:46.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:39:46.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:39:46.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:39:46.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:39:46.819 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:39:47.079 10:55:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:39:47.079 10:55:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:47.079 10:55:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:47.079 10:55:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:39:47.079 10:55:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:39:47.079 10:55:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:47.079 10:55:52 
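Each spdkcli command above maps onto one of the target's JSON-RPC methods, so the same configuration can be sketched with scripts/rpc.py directly. A rough equivalent of the first few steps, reusing the bdev names, serial number, and listener address from the commands above (illustrative only; the test itself drives spdkcli_job.py):

  ./scripts/rpc.py bdev_malloc_create 32 512 -b Malloc1
  ./scripts/rpc.py nvmf_create_transport -t tcp
  ./scripts/rpc.py nvmf_create_subsystem nqn.2014-08.org.spdk:cnode1 -s N37SXV509SRW -a
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2014-08.org.spdk:cnode1 Malloc3
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4260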
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:39:47.079 10:55:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:39:47.346 10:55:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:39:47.346 10:55:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:39:47.346 10:55:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:39:47.346 10:55:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:47.346 10:55:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:47.612 10:55:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:39:47.612 10:55:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:39:47.612 10:55:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:47.612 10:55:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:39:47.612 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:39:47.612 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:39:47.612 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:39:47.612 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:39:47.612 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:39:47.612 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:39:47.612 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:39:47.612 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:39:47.612 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:39:47.612 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:39:47.612 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:39:47.612 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:39:47.612 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:39:47.612 ' 00:39:52.886 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:39:52.886 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:39:52.886 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:39:52.886 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:39:52.886 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:39:52.886 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:39:52.886 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 
00:39:52.886 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:39:52.886 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:39:52.886 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:39:52.886 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:39:52.886 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:39:52.886 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:39:52.886 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:39:52.886 10:55:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:39:52.886 10:55:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:52.886 10:55:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:52.886 10:55:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2267697 00:39:52.886 10:55:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 2267697 ']' 00:39:52.886 10:55:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 2267697 00:39:52.886 10:55:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:39:52.886 10:55:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:52.886 10:55:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2267697 00:39:52.886 10:55:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:39:52.886 10:55:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:39:52.886 10:55:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2267697' 00:39:52.886 killing process with pid 2267697 00:39:52.886 10:55:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 2267697 00:39:52.886 10:55:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 2267697 00:39:52.886 10:55:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:39:52.886 10:55:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:39:52.886 10:55:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2267697 ']' 00:39:52.886 10:55:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2267697 00:39:52.886 10:55:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 2267697 ']' 00:39:52.886 10:55:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 2267697 00:39:52.886 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2267697) - No such process 00:39:52.886 10:55:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 2267697 is not found' 00:39:52.886 Process with pid 2267697 is not found 00:39:52.886 10:55:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:39:52.886 10:55:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:39:52.886 10:55:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:39:52.886 00:39:52.886 real 0m15.531s 00:39:52.886 user 0m32.051s 00:39:52.886 sys 0m0.676s 00:39:52.886 10:55:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:52.886 10:55:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:52.886 
************************************ 00:39:52.886 END TEST spdkcli_nvmf_tcp 00:39:52.886 ************************************ 00:39:52.886 10:55:58 -- common/autotest_common.sh@1142 -- # return 0 00:39:52.886 10:55:58 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:39:52.886 10:55:58 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:39:52.886 10:55:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:52.886 10:55:58 -- common/autotest_common.sh@10 -- # set +x 00:39:52.886 ************************************ 00:39:52.886 START TEST nvmf_identify_passthru 00:39:52.886 ************************************ 00:39:52.886 10:55:58 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:39:52.886 * Looking for test storage... 00:39:52.886 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:52.886 10:55:58 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:52.886 10:55:58 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:39:52.886 10:55:58 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:52.886 10:55:58 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:52.886 10:55:58 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:52.886 10:55:58 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:52.886 10:55:58 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:52.886 10:55:58 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:52.886 10:55:58 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:52.886 10:55:58 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:52.886 10:55:58 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:52.886 10:55:58 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:52.886 10:55:58 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:52.886 10:55:58 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:52.886 10:55:58 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:52.886 10:55:58 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:52.886 10:55:58 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:52.886 10:55:58 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:52.886 10:55:58 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:52.886 10:55:58 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:52.886 10:55:58 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:52.886 10:55:58 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:52.886 10:55:58 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:52.886 10:55:58 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:52.886 10:55:58 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:52.886 10:55:58 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:39:52.886 10:55:58 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:52.886 10:55:58 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:39:52.886 10:55:58 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:52.886 10:55:58 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:52.886 10:55:58 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:52.886 10:55:58 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:52.886 10:55:58 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:52.886 10:55:58 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:52.886 10:55:58 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:52.886 10:55:58 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:52.886 10:55:58 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:52.886 10:55:58 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:52.886 10:55:58 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:52.886 10:55:58 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:52.887 10:55:58 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:52.887 10:55:58 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:52.887 10:55:58 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:52.887 10:55:58 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:39:52.887 10:55:58 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:52.887 10:55:58 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:39:52.887 10:55:58 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:39:52.887 10:55:58 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:52.887 10:55:58 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:39:52.887 10:55:58 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:39:52.887 10:55:58 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:39:52.887 10:55:58 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:52.887 10:55:58 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:52.887 10:55:58 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:52.887 10:55:58 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:39:52.887 10:55:58 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:39:52.887 10:55:58 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:39:52.887 10:55:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:01.019 10:56:06 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:40:01.019 Found 0000:31:00.0 (0x8086 - 0x159b) 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:40:01.019 Found 0000:31:00.1 (0x8086 - 0x159b) 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:40:01.019 Found net devices under 0000:31:00.0: cvl_0_0 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:40:01.019 Found net devices under 0000:31:00.1: cvl_0_1 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
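gather_supported_nvmf_pci_devs above walks the PCI bus and settles on the two Intel E810 ports (device ID 0x159b) at 0000:31:00.0 and 0000:31:00.1, exposed as cvl_0_0 and cvl_0_1. The same lookup can be sketched by hand (assuming lspci is available; the PCI address is the one reported in this run):

  lspci -d 8086:159b                          # list ports matching vendor 0x8086, device 0x159b
  ls /sys/bus/pci/devices/0000:31:00.0/net    # net device name bound to that PCI address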
00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:40:01.019 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:01.019 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.525 ms 00:40:01.019 00:40:01.019 --- 10.0.0.2 ping statistics --- 00:40:01.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:01.019 rtt min/avg/max/mdev = 0.525/0.525/0.525/0.000 ms 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:01.019 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:01.019 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.347 ms 00:40:01.019 00:40:01.019 --- 10.0.0.1 ping statistics --- 00:40:01.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:01.019 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:40:01.019 10:56:06 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:40:01.019 10:56:06 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:40:01.019 10:56:06 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:40:01.020 10:56:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:01.020 10:56:06 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:40:01.020 10:56:06 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:40:01.020 10:56:06 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:40:01.020 10:56:06 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:40:01.020 10:56:06 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:40:01.020 10:56:06 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:40:01.020 10:56:06 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:40:01.020 10:56:06 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:40:01.020 10:56:06 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:40:01.020 10:56:06 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:40:01.020 10:56:06 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:40:01.020 10:56:06 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:40:01.020 10:56:06 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:65:00.0 00:40:01.020 10:56:06 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:40:01.020 10:56:06 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:40:01.020 10:56:06 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:40:01.020 10:56:06 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:40:01.020 10:56:06 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:40:01.020 EAL: No free 2048 kB hugepages reported on node 1 00:40:01.280 
10:56:06 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605494 00:40:01.541 10:56:06 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:40:01.541 10:56:06 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:40:01.541 10:56:06 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:40:01.541 EAL: No free 2048 kB hugepages reported on node 1 00:40:01.802 10:56:07 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:40:01.802 10:56:07 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:40:01.802 10:56:07 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:40:01.802 10:56:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:01.802 10:56:07 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:40:01.802 10:56:07 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:40:01.802 10:56:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:02.062 10:56:07 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2275018 00:40:02.062 10:56:07 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:02.062 10:56:07 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:40:02.062 10:56:07 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2275018 00:40:02.062 10:56:07 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 2275018 ']' 00:40:02.062 10:56:07 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:02.062 10:56:07 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:02.063 10:56:07 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:02.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:02.063 10:56:07 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:02.063 10:56:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:02.063 [2024-07-22 10:56:07.560998] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:40:02.063 [2024-07-22 10:56:07.561076] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:02.063 EAL: No free 2048 kB hugepages reported on node 1 00:40:02.063 [2024-07-22 10:56:07.639168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:02.063 [2024-07-22 10:56:07.676053] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:02.063 [2024-07-22 10:56:07.676094] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
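Before the NVMe-oF target comes up, the test records a baseline directly from the local controller at 0000:65:00.0 over PCIe; the same fields are read back later through the TCP passthru subsystem and compared. A simplified sketch of that capture step, assuming spdk_nvme_identify is on PATH (the run uses the full workspace path under build/bin):

bdf=0000:65:00.0
# identify the controller over PCIe and keep the fields used for the later comparison
nvme_serial_number=$(spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 |
    grep 'Serial Number:' | awk '{print $3}')
nvme_model_number=$(spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 |
    grep 'Model Number:' | awk '{print $3}')
echo "$nvme_serial_number $nvme_model_number"   # S64GNE0R605494 SAMSUNG in this run

The later '[' ... '!=' ... ']' checks in the trace simply compare these strings with what the same identify tool reports over the tcp/10.0.0.2:4420 connection to nqn.2016-06.io.spdk:cnode1.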
00:40:02.063 [2024-07-22 10:56:07.676102] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:02.063 [2024-07-22 10:56:07.676108] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:02.063 [2024-07-22 10:56:07.676114] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:02.063 [2024-07-22 10:56:07.676261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:40:02.063 [2024-07-22 10:56:07.676403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:40:02.063 [2024-07-22 10:56:07.676461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:40:02.063 [2024-07-22 10:56:07.676615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:03.001 10:56:08 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:03.001 10:56:08 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:40:03.001 10:56:08 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:40:03.001 10:56:08 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:03.001 10:56:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:03.001 INFO: Log level set to 20 00:40:03.001 INFO: Requests: 00:40:03.001 { 00:40:03.001 "jsonrpc": "2.0", 00:40:03.001 "method": "nvmf_set_config", 00:40:03.001 "id": 1, 00:40:03.001 "params": { 00:40:03.001 "admin_cmd_passthru": { 00:40:03.001 "identify_ctrlr": true 00:40:03.001 } 00:40:03.001 } 00:40:03.001 } 00:40:03.001 00:40:03.001 INFO: response: 00:40:03.001 { 00:40:03.001 "jsonrpc": "2.0", 00:40:03.001 "id": 1, 00:40:03.001 "result": true 00:40:03.001 } 00:40:03.001 00:40:03.001 10:56:08 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:03.001 10:56:08 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:40:03.001 10:56:08 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:03.001 10:56:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:03.001 INFO: Setting log level to 20 00:40:03.001 INFO: Setting log level to 20 00:40:03.001 INFO: Log level set to 20 00:40:03.001 INFO: Log level set to 20 00:40:03.001 INFO: Requests: 00:40:03.001 { 00:40:03.001 "jsonrpc": "2.0", 00:40:03.001 "method": "framework_start_init", 00:40:03.001 "id": 1 00:40:03.001 } 00:40:03.001 00:40:03.001 INFO: Requests: 00:40:03.001 { 00:40:03.001 "jsonrpc": "2.0", 00:40:03.001 "method": "framework_start_init", 00:40:03.001 "id": 1 00:40:03.001 } 00:40:03.001 00:40:03.001 [2024-07-22 10:56:08.409812] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:40:03.001 INFO: response: 00:40:03.001 { 00:40:03.001 "jsonrpc": "2.0", 00:40:03.001 "id": 1, 00:40:03.001 "result": true 00:40:03.001 } 00:40:03.001 00:40:03.001 INFO: response: 00:40:03.001 { 00:40:03.001 "jsonrpc": "2.0", 00:40:03.001 "id": 1, 00:40:03.001 "result": true 00:40:03.001 } 00:40:03.001 00:40:03.001 10:56:08 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:03.001 10:56:08 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:03.001 10:56:08 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:03.001 10:56:08 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:40:03.001 INFO: Setting log level to 40 00:40:03.001 INFO: Setting log level to 40 00:40:03.001 INFO: Setting log level to 40 00:40:03.001 [2024-07-22 10:56:08.423106] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:03.001 10:56:08 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:03.001 10:56:08 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:40:03.001 10:56:08 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:40:03.001 10:56:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:03.001 10:56:08 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:40:03.001 10:56:08 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:03.002 10:56:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:03.262 Nvme0n1 00:40:03.262 10:56:08 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:03.262 10:56:08 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:40:03.262 10:56:08 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:03.262 10:56:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:03.262 10:56:08 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:03.262 10:56:08 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:40:03.262 10:56:08 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:03.262 10:56:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:03.262 10:56:08 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:03.262 10:56:08 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:03.262 10:56:08 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:03.262 10:56:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:03.262 [2024-07-22 10:56:08.805600] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:03.262 10:56:08 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:03.262 10:56:08 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:40:03.262 10:56:08 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:03.262 10:56:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:03.262 [ 00:40:03.262 { 00:40:03.262 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:40:03.262 "subtype": "Discovery", 00:40:03.262 "listen_addresses": [], 00:40:03.262 "allow_any_host": true, 00:40:03.262 "hosts": [] 00:40:03.262 }, 00:40:03.262 { 00:40:03.262 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:40:03.262 "subtype": "NVMe", 00:40:03.262 "listen_addresses": [ 00:40:03.262 { 00:40:03.262 "trtype": "TCP", 00:40:03.262 "adrfam": "IPv4", 00:40:03.262 "traddr": "10.0.0.2", 00:40:03.262 "trsvcid": "4420" 00:40:03.262 } 00:40:03.262 ], 00:40:03.262 "allow_any_host": true, 00:40:03.262 "hosts": [], 00:40:03.262 "serial_number": 
"SPDK00000000000001", 00:40:03.262 "model_number": "SPDK bdev Controller", 00:40:03.262 "max_namespaces": 1, 00:40:03.262 "min_cntlid": 1, 00:40:03.262 "max_cntlid": 65519, 00:40:03.262 "namespaces": [ 00:40:03.262 { 00:40:03.262 "nsid": 1, 00:40:03.262 "bdev_name": "Nvme0n1", 00:40:03.262 "name": "Nvme0n1", 00:40:03.262 "nguid": "3634473052605494002538450000002B", 00:40:03.262 "uuid": "36344730-5260-5494-0025-38450000002b" 00:40:03.262 } 00:40:03.262 ] 00:40:03.262 } 00:40:03.262 ] 00:40:03.262 10:56:08 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:03.262 10:56:08 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:03.263 10:56:08 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:40:03.263 10:56:08 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:40:03.263 EAL: No free 2048 kB hugepages reported on node 1 00:40:03.523 10:56:08 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605494 00:40:03.523 10:56:08 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:03.523 10:56:08 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:40:03.523 10:56:08 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:40:03.523 EAL: No free 2048 kB hugepages reported on node 1 00:40:03.523 10:56:09 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:40:03.523 10:56:09 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605494 '!=' S64GNE0R605494 ']' 00:40:03.523 10:56:09 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:40:03.523 10:56:09 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:03.523 10:56:09 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:03.523 10:56:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:03.784 10:56:09 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:03.784 10:56:09 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:40:03.784 10:56:09 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:40:03.784 10:56:09 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:40:03.784 10:56:09 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:40:03.784 10:56:09 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:40:03.784 10:56:09 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:40:03.784 10:56:09 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:40:03.784 10:56:09 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:40:03.784 rmmod nvme_tcp 00:40:03.784 rmmod nvme_fabrics 00:40:03.784 rmmod nvme_keyring 00:40:03.784 10:56:09 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:40:03.784 10:56:09 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:40:03.784 10:56:09 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:40:03.784 10:56:09 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 2275018 ']' 00:40:03.784 10:56:09 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 2275018 00:40:03.784 10:56:09 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 2275018 ']' 00:40:03.784 10:56:09 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 2275018 00:40:03.784 10:56:09 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:40:03.785 10:56:09 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:03.785 10:56:09 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2275018 00:40:03.785 10:56:09 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:40:03.785 10:56:09 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:40:03.785 10:56:09 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2275018' 00:40:03.785 killing process with pid 2275018 00:40:03.785 10:56:09 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 2275018 00:40:03.785 10:56:09 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 2275018 00:40:04.046 10:56:09 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:40:04.046 10:56:09 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:40:04.046 10:56:09 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:40:04.046 10:56:09 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:40:04.046 10:56:09 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:40:04.046 10:56:09 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:04.046 10:56:09 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:04.046 10:56:09 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:05.960 10:56:11 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:40:06.222 00:40:06.222 real 0m13.461s 00:40:06.222 user 0m10.245s 00:40:06.222 sys 0m6.671s 00:40:06.222 10:56:11 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:06.222 10:56:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:06.222 ************************************ 00:40:06.222 END TEST nvmf_identify_passthru 00:40:06.222 ************************************ 00:40:06.222 10:56:11 -- common/autotest_common.sh@1142 -- # return 0 00:40:06.222 10:56:11 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:40:06.222 10:56:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:06.222 10:56:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:06.222 10:56:11 -- common/autotest_common.sh@10 -- # set +x 00:40:06.222 ************************************ 00:40:06.222 START TEST nvmf_dif 00:40:06.222 ************************************ 00:40:06.222 10:56:11 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:40:06.222 * Looking for test storage... 
00:40:06.222 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:06.222 10:56:11 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:06.222 10:56:11 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:40:06.222 10:56:11 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:06.222 10:56:11 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:06.222 10:56:11 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:06.222 10:56:11 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:06.222 10:56:11 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:06.222 10:56:11 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:06.222 10:56:11 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:06.222 10:56:11 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:06.222 10:56:11 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:06.222 10:56:11 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:06.222 10:56:11 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:06.222 10:56:11 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:06.222 10:56:11 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:06.222 10:56:11 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:06.222 10:56:11 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:06.222 10:56:11 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:06.222 10:56:11 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:06.222 10:56:11 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:06.222 10:56:11 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:06.222 10:56:11 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:06.222 10:56:11 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:06.222 10:56:11 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:06.222 10:56:11 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:06.222 10:56:11 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:40:06.222 10:56:11 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:06.222 10:56:11 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:40:06.222 10:56:11 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:40:06.222 10:56:11 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:06.222 10:56:11 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:06.222 10:56:11 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:06.222 10:56:11 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:06.222 10:56:11 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:06.222 10:56:11 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:06.222 10:56:11 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:40:06.222 10:56:11 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:40:06.222 10:56:11 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:40:06.222 10:56:11 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:40:06.222 10:56:11 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:40:06.222 10:56:11 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:40:06.222 10:56:11 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:40:06.222 10:56:11 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:06.222 10:56:11 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:40:06.222 10:56:11 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:40:06.222 10:56:11 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:40:06.222 10:56:11 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:06.222 10:56:11 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:06.222 10:56:11 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:06.222 10:56:11 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:40:06.222 10:56:11 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:40:06.222 10:56:11 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:40:06.222 10:56:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:40:14.357 Found 0000:31:00.0 (0x8086 - 0x159b) 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:40:14.357 Found 0000:31:00.1 (0x8086 - 0x159b) 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:40:14.357 Found net devices under 0000:31:00.0: cvl_0_0 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:40:14.357 Found net devices under 0000:31:00.1: cvl_0_1 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:14.357 10:56:19 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:40:14.357 10:56:20 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:14.617 10:56:20 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:14.617 10:56:20 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:14.617 10:56:20 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:40:14.617 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:14.617 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:40:14.617 00:40:14.617 --- 10.0.0.2 ping statistics --- 00:40:14.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:14.617 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:40:14.617 10:56:20 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:14.617 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:14.617 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:40:14.617 00:40:14.617 --- 10.0.0.1 ping statistics --- 00:40:14.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:14.617 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:40:14.617 10:56:20 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:14.617 10:56:20 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:40:14.617 10:56:20 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:40:14.617 10:56:20 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:40:17.909 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:40:17.909 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:40:17.909 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:40:17.909 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:40:17.909 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:40:17.909 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:40:17.909 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:40:17.909 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:40:17.909 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:40:17.909 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:40:17.909 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:40:17.909 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:40:17.909 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:40:17.909 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:40:17.909 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:40:17.909 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:40:17.909 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:40:17.909 10:56:23 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:17.909 10:56:23 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:40:17.909 10:56:23 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:40:17.909 10:56:23 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:17.909 10:56:23 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:40:17.909 10:56:23 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:40:17.909 10:56:23 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:40:17.909 10:56:23 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:40:17.909 10:56:23 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:40:17.909 10:56:23 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:40:17.909 10:56:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:17.909 10:56:23 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=2281659 00:40:17.909 10:56:23 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 2281659 00:40:17.909 10:56:23 nvmf_dif -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:40:17.909 10:56:23 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 2281659 ']' 00:40:17.909 10:56:23 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:17.909 10:56:23 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:17.909 10:56:23 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:17.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:17.909 10:56:23 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:17.909 10:56:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:17.909 [2024-07-22 10:56:23.482307] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:40:17.909 [2024-07-22 10:56:23.482354] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:17.909 EAL: No free 2048 kB hugepages reported on node 1 00:40:17.909 [2024-07-22 10:56:23.553543] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:17.909 [2024-07-22 10:56:23.583970] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:17.909 [2024-07-22 10:56:23.584006] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:17.909 [2024-07-22 10:56:23.584013] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:17.909 [2024-07-22 10:56:23.584020] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:17.909 [2024-07-22 10:56:23.584026] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
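For the DIF tests the target is launched inside the cvl_0_0_ns_spdk namespace with the EAL parameters shown above, and the harness blocks until the JSON-RPC socket answers before configuring it. A rough sketch of that start-up step, assuming the default /var/tmp/spdk.sock socket; the actual script relies on waitforlisten from autotest_common.sh rather than this polling loop:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
nvmfpid=$!
# poll the RPC socket until the target responds; rpc_get_methods is a cheap query
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) is ready for configuration"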
00:40:17.909 [2024-07-22 10:56:23.584045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:18.848 10:56:24 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:18.848 10:56:24 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:40:18.848 10:56:24 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:40:18.848 10:56:24 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:40:18.848 10:56:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:18.848 10:56:24 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:18.848 10:56:24 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:40:18.848 10:56:24 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:40:18.848 10:56:24 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:18.848 10:56:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:18.848 [2024-07-22 10:56:24.296167] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:18.848 10:56:24 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:18.848 10:56:24 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:40:18.848 10:56:24 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:18.848 10:56:24 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:18.848 10:56:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:18.848 ************************************ 00:40:18.848 START TEST fio_dif_1_default 00:40:18.848 ************************************ 00:40:18.848 10:56:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:40:18.848 10:56:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:40:18.848 10:56:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:40:18.848 10:56:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:40:18.848 10:56:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:40:18.848 10:56:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:40:18.848 10:56:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:40:18.848 10:56:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:18.848 10:56:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:18.848 bdev_null0 00:40:18.848 10:56:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:18.848 10:56:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:18.848 10:56:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:18.848 10:56:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:18.848 10:56:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:18.848 10:56:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:18.848 10:56:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:18.848 10:56:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:18.848 10:56:24 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:18.849 10:56:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:18.849 10:56:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:18.849 10:56:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:18.849 [2024-07-22 10:56:24.384513] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:18.849 10:56:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:18.849 10:56:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:40:18.849 10:56:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:40:18.849 10:56:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:40:18.849 10:56:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:40:18.849 10:56:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:18.849 10:56:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:40:18.849 10:56:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:18.849 10:56:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:18.849 10:56:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:18.849 { 00:40:18.849 "params": { 00:40:18.849 "name": "Nvme$subsystem", 00:40:18.849 "trtype": "$TEST_TRANSPORT", 00:40:18.849 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:18.849 "adrfam": "ipv4", 00:40:18.849 "trsvcid": "$NVMF_PORT", 00:40:18.849 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:18.849 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:18.849 "hdgst": ${hdgst:-false}, 00:40:18.849 "ddgst": ${ddgst:-false} 00:40:18.849 }, 00:40:18.849 "method": "bdev_nvme_attach_controller" 00:40:18.849 } 00:40:18.849 EOF 00:40:18.849 )") 00:40:18.849 10:56:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:40:18.849 10:56:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:40:18.849 10:56:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:18.849 10:56:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:40:18.849 10:56:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:40:18.849 10:56:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:40:18.849 10:56:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:18.849 10:56:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:40:18.849 10:56:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:40:18.849 10:56:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:18.849 10:56:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:40:18.849 10:56:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:18.849 10:56:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:40:18.849 10:56:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:40:18.849 10:56:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:40:18.849 10:56:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:18.849 10:56:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:40:18.849 10:56:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:40:18.849 10:56:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:40:18.849 "params": { 00:40:18.849 "name": "Nvme0", 00:40:18.849 "trtype": "tcp", 00:40:18.849 "traddr": "10.0.0.2", 00:40:18.849 "adrfam": "ipv4", 00:40:18.849 "trsvcid": "4420", 00:40:18.849 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:18.849 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:18.849 "hdgst": false, 00:40:18.849 "ddgst": false 00:40:18.849 }, 00:40:18.849 "method": "bdev_nvme_attach_controller" 00:40:18.849 }' 00:40:18.849 10:56:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:40:18.849 10:56:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:40:18.849 10:56:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:18.849 10:56:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:18.849 10:56:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:40:18.849 10:56:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:18.849 10:56:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:40:18.849 10:56:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:40:18.849 10:56:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:18.849 10:56:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:19.106 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:40:19.106 fio-3.35 00:40:19.106 Starting 1 thread 00:40:19.365 EAL: No free 2048 kB hugepages reported on node 1 00:40:31.597 00:40:31.597 filename0: (groupid=0, jobs=1): err= 0: pid=2282191: Mon Jul 22 10:56:35 2024 00:40:31.597 read: IOPS=186, BW=747KiB/s (765kB/s)(7472KiB/10006msec) 00:40:31.597 slat (nsec): min=5406, max=31584, avg=6102.83, stdev=1343.72 00:40:31.597 clat (usec): min=695, max=42780, avg=21409.15, stdev=20412.03 00:40:31.597 lat (usec): min=701, max=42812, avg=21415.26, stdev=20412.04 00:40:31.597 clat percentiles (usec): 00:40:31.597 | 1.00th=[ 848], 5.00th=[ 930], 10.00th=[ 938], 20.00th=[ 963], 00:40:31.597 | 30.00th=[ 971], 40.00th=[ 988], 50.00th=[41157], 60.00th=[41157], 00:40:31.597 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:40:31.597 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:40:31.597 | 99.99th=[42730] 00:40:31.597 bw ( KiB/s): min= 704, max= 768, per=99.77%, avg=745.60, stdev=29.55, samples=20 00:40:31.597 iops : min= 176, max= 192, 
avg=186.40, stdev= 7.39, samples=20 00:40:31.597 lat (usec) : 750=0.21%, 1000=45.88% 00:40:31.597 lat (msec) : 2=3.80%, 50=50.11% 00:40:31.597 cpu : usr=94.72%, sys=5.09%, ctx=14, majf=0, minf=266 00:40:31.597 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:31.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:31.597 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:31.597 issued rwts: total=1868,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:31.597 latency : target=0, window=0, percentile=100.00%, depth=4 00:40:31.597 00:40:31.597 Run status group 0 (all jobs): 00:40:31.597 READ: bw=747KiB/s (765kB/s), 747KiB/s-747KiB/s (765kB/s-765kB/s), io=7472KiB (7651kB), run=10006-10006msec 00:40:31.597 10:56:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:40:31.597 10:56:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:40:31.597 10:56:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:40:31.597 10:56:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:31.597 10:56:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:40:31.597 10:56:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:31.597 10:56:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:31.597 10:56:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:31.597 10:56:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:31.597 10:56:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:31.597 10:56:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:31.597 10:56:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:31.597 10:56:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:31.597 00:40:31.597 real 0m11.136s 00:40:31.597 user 0m25.919s 00:40:31.597 sys 0m0.792s 00:40:31.597 10:56:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:31.597 10:56:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:31.597 ************************************ 00:40:31.597 END TEST fio_dif_1_default 00:40:31.597 ************************************ 00:40:31.598 10:56:35 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:40:31.598 10:56:35 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:40:31.598 10:56:35 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:31.598 10:56:35 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:31.598 10:56:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:31.598 ************************************ 00:40:31.598 START TEST fio_dif_1_multi_subsystems 00:40:31.598 ************************************ 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:40:31.598 10:56:35 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:31.598 bdev_null0 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:31.598 [2024-07-22 10:56:35.598253] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:31.598 bdev_null1 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:31.598 { 00:40:31.598 "params": { 00:40:31.598 "name": "Nvme$subsystem", 00:40:31.598 "trtype": "$TEST_TRANSPORT", 00:40:31.598 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:31.598 "adrfam": "ipv4", 00:40:31.598 "trsvcid": "$NVMF_PORT", 00:40:31.598 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:31.598 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:31.598 "hdgst": ${hdgst:-false}, 00:40:31.598 "ddgst": ${ddgst:-false} 00:40:31.598 }, 00:40:31.598 "method": "bdev_nvme_attach_controller" 00:40:31.598 } 00:40:31.598 EOF 00:40:31.598 )") 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # 
local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:31.598 { 00:40:31.598 "params": { 00:40:31.598 "name": "Nvme$subsystem", 00:40:31.598 "trtype": "$TEST_TRANSPORT", 00:40:31.598 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:31.598 "adrfam": "ipv4", 00:40:31.598 "trsvcid": "$NVMF_PORT", 00:40:31.598 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:31.598 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:31.598 "hdgst": ${hdgst:-false}, 00:40:31.598 "ddgst": ${ddgst:-false} 00:40:31.598 }, 00:40:31.598 "method": "bdev_nvme_attach_controller" 00:40:31.598 } 00:40:31.598 EOF 00:40:31.598 )") 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:40:31.598 "params": { 00:40:31.598 "name": "Nvme0", 00:40:31.598 "trtype": "tcp", 00:40:31.598 "traddr": "10.0.0.2", 00:40:31.598 "adrfam": "ipv4", 00:40:31.598 "trsvcid": "4420", 00:40:31.598 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:31.598 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:31.598 "hdgst": false, 00:40:31.598 "ddgst": false 00:40:31.598 }, 00:40:31.598 "method": "bdev_nvme_attach_controller" 00:40:31.598 },{ 00:40:31.598 "params": { 00:40:31.598 "name": "Nvme1", 00:40:31.598 "trtype": "tcp", 00:40:31.598 "traddr": "10.0.0.2", 00:40:31.598 "adrfam": "ipv4", 00:40:31.598 "trsvcid": "4420", 00:40:31.598 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:31.598 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:31.598 "hdgst": false, 00:40:31.598 "ddgst": false 00:40:31.598 }, 00:40:31.598 "method": "bdev_nvme_attach_controller" 00:40:31.598 }' 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:40:31.598 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:40:31.599 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:31.599 10:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:31.599 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:40:31.599 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:40:31.599 fio-3.35 00:40:31.599 Starting 2 threads 00:40:31.599 EAL: No free 2048 kB hugepages reported on node 1 00:40:41.631 00:40:41.631 filename0: (groupid=0, jobs=1): err= 0: pid=2284391: Mon Jul 22 10:56:46 2024 00:40:41.631 read: IOPS=96, BW=385KiB/s (394kB/s)(3856KiB/10025msec) 00:40:41.631 slat (nsec): min=5413, max=74367, avg=6933.43, stdev=3416.99 00:40:41.631 clat (usec): min=40871, max=43008, avg=41576.42, stdev=478.92 00:40:41.631 lat (usec): min=40877, max=43016, avg=41583.35, stdev=479.09 00:40:41.631 clat percentiles (usec): 00:40:41.631 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:40:41.631 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:40:41.631 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:40:41.631 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:40:41.631 | 99.99th=[43254] 
00:40:41.631 bw ( KiB/s): min= 384, max= 384, per=33.80%, avg=384.00, stdev= 0.00, samples=20 00:40:41.631 iops : min= 96, max= 96, avg=96.00, stdev= 0.00, samples=20 00:40:41.631 lat (msec) : 50=100.00% 00:40:41.631 cpu : usr=96.82%, sys=2.95%, ctx=10, majf=0, minf=202 00:40:41.631 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:41.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:41.631 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:41.631 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:41.631 latency : target=0, window=0, percentile=100.00%, depth=4 00:40:41.631 filename1: (groupid=0, jobs=1): err= 0: pid=2284392: Mon Jul 22 10:56:46 2024 00:40:41.631 read: IOPS=187, BW=752KiB/s (770kB/s)(7536KiB/10027msec) 00:40:41.631 slat (nsec): min=5411, max=39371, avg=6571.51, stdev=2080.51 00:40:41.631 clat (usec): min=544, max=42643, avg=21269.58, stdev=20326.33 00:40:41.631 lat (usec): min=550, max=42683, avg=21276.15, stdev=20326.11 00:40:41.631 clat percentiles (usec): 00:40:41.631 | 1.00th=[ 570], 5.00th=[ 701], 10.00th=[ 750], 20.00th=[ 947], 00:40:41.631 | 30.00th=[ 971], 40.00th=[ 988], 50.00th=[41157], 60.00th=[41157], 00:40:41.631 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:40:41.631 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:40:41.631 | 99.99th=[42730] 00:40:41.631 bw ( KiB/s): min= 704, max= 768, per=66.19%, avg=752.00, stdev=28.43, samples=20 00:40:41.631 iops : min= 176, max= 192, avg=188.00, stdev= 7.11, samples=20 00:40:41.631 lat (usec) : 750=9.98%, 1000=33.92% 00:40:41.631 lat (msec) : 2=6.00%, 50=50.11% 00:40:41.631 cpu : usr=96.37%, sys=3.40%, ctx=14, majf=0, minf=119 00:40:41.631 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:41.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:41.631 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:41.631 issued rwts: total=1884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:41.631 latency : target=0, window=0, percentile=100.00%, depth=4 00:40:41.631 00:40:41.631 Run status group 0 (all jobs): 00:40:41.631 READ: bw=1136KiB/s (1163kB/s), 385KiB/s-752KiB/s (394kB/s-770kB/s), io=11.1MiB (11.7MB), run=10025-10027msec 00:40:41.631 10:56:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:40:41.631 10:56:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:40:41.631 10:56:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:40:41.631 10:56:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:41.631 10:56:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:40:41.631 10:56:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:41.631 10:56:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:41.631 10:56:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:41.631 10:56:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:41.631 10:56:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:41.631 10:56:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:40:41.631 10:56:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:41.631 10:56:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:41.631 10:56:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:40:41.631 10:56:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:40:41.631 10:56:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:40:41.631 10:56:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:41.631 10:56:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:41.631 10:56:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:41.631 10:56:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:41.631 10:56:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:40:41.631 10:56:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:41.631 10:56:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:41.631 10:56:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:41.631 00:40:41.631 real 0m11.261s 00:40:41.631 user 0m35.800s 00:40:41.631 sys 0m0.994s 00:40:41.631 10:56:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:41.631 10:56:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:41.631 ************************************ 00:40:41.631 END TEST fio_dif_1_multi_subsystems 00:40:41.631 ************************************ 00:40:41.631 10:56:46 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:40:41.631 10:56:46 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:40:41.631 10:56:46 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:41.631 10:56:46 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:41.631 10:56:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:41.631 ************************************ 00:40:41.631 START TEST fio_dif_rand_params 00:40:41.631 ************************************ 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@31 -- # create_subsystem 0 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:41.631 bdev_null0 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:41.631 [2024-07-22 10:56:46.939240] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:41.631 { 00:40:41.631 "params": { 00:40:41.631 "name": "Nvme$subsystem", 00:40:41.631 "trtype": "$TEST_TRANSPORT", 00:40:41.631 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:41.631 "adrfam": "ipv4", 00:40:41.631 "trsvcid": "$NVMF_PORT", 00:40:41.631 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:40:41.631 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:41.631 "hdgst": ${hdgst:-false}, 00:40:41.631 "ddgst": ${ddgst:-false} 00:40:41.631 }, 00:40:41.631 "method": "bdev_nvme_attach_controller" 00:40:41.631 } 00:40:41.631 EOF 00:40:41.631 )") 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:40:41.631 "params": { 00:40:41.631 "name": "Nvme0", 00:40:41.631 "trtype": "tcp", 00:40:41.631 "traddr": "10.0.0.2", 00:40:41.631 "adrfam": "ipv4", 00:40:41.631 "trsvcid": "4420", 00:40:41.631 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:41.631 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:41.631 "hdgst": false, 00:40:41.631 "ddgst": false 00:40:41.631 }, 00:40:41.631 "method": "bdev_nvme_attach_controller" 00:40:41.631 }' 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:40:41.631 10:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:41.631 10:56:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:40:41.631 10:56:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:40:41.631 10:56:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:41.632 10:56:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:41.966 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:40:41.966 ... 
00:40:41.966 fio-3.35 00:40:41.966 Starting 3 threads 00:40:41.966 EAL: No free 2048 kB hugepages reported on node 1 00:40:47.278 00:40:47.278 filename0: (groupid=0, jobs=1): err= 0: pid=2286733: Mon Jul 22 10:56:52 2024 00:40:47.278 read: IOPS=254, BW=31.8MiB/s (33.3MB/s)(160MiB/5030msec) 00:40:47.278 slat (nsec): min=5416, max=30861, avg=8032.99, stdev=1674.32 00:40:47.278 clat (usec): min=4633, max=53618, avg=11793.82, stdev=8083.30 00:40:47.278 lat (usec): min=4639, max=53625, avg=11801.85, stdev=8083.10 00:40:47.278 clat percentiles (usec): 00:40:47.278 | 1.00th=[ 5800], 5.00th=[ 7504], 10.00th=[ 8225], 20.00th=[ 8848], 00:40:47.278 | 30.00th=[ 9241], 40.00th=[ 9634], 50.00th=[10159], 60.00th=[10552], 00:40:47.278 | 70.00th=[11338], 80.00th=[11994], 90.00th=[13173], 95.00th=[14484], 00:40:47.278 | 99.00th=[51643], 99.50th=[52691], 99.90th=[53216], 99.95th=[53740], 00:40:47.278 | 99.99th=[53740] 00:40:47.278 bw ( KiB/s): min=21504, max=38144, per=34.97%, avg=32640.00, stdev=4878.57, samples=10 00:40:47.278 iops : min= 168, max= 298, avg=255.00, stdev=38.11, samples=10 00:40:47.278 lat (msec) : 10=47.18%, 20=48.83%, 50=1.25%, 100=2.74% 00:40:47.278 cpu : usr=96.54%, sys=3.18%, ctx=10, majf=0, minf=59 00:40:47.278 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:47.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:47.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:47.278 issued rwts: total=1278,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:47.278 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:47.278 filename0: (groupid=0, jobs=1): err= 0: pid=2286734: Mon Jul 22 10:56:52 2024 00:40:47.278 read: IOPS=267, BW=33.4MiB/s (35.1MB/s)(169MiB/5045msec) 00:40:47.278 slat (nsec): min=5411, max=32122, avg=7850.08, stdev=1487.81 00:40:47.278 clat (usec): min=4449, max=50576, avg=11169.75, stdev=5368.64 00:40:47.278 lat (usec): min=4454, max=50585, avg=11177.60, stdev=5368.62 00:40:47.278 clat percentiles (usec): 00:40:47.278 | 1.00th=[ 5669], 5.00th=[ 6718], 10.00th=[ 7373], 20.00th=[ 8356], 00:40:47.278 | 30.00th=[ 9503], 40.00th=[10159], 50.00th=[10814], 60.00th=[11338], 00:40:47.278 | 70.00th=[11994], 80.00th=[12518], 90.00th=[13304], 95.00th=[14222], 00:40:47.278 | 99.00th=[47973], 99.50th=[49546], 99.90th=[50594], 99.95th=[50594], 00:40:47.278 | 99.99th=[50594] 00:40:47.278 bw ( KiB/s): min=32000, max=38144, per=36.97%, avg=34508.80, stdev=2326.96, samples=10 00:40:47.278 iops : min= 250, max= 298, avg=269.60, stdev=18.18, samples=10 00:40:47.278 lat (msec) : 10=37.41%, 20=60.89%, 50=1.48%, 100=0.22% 00:40:47.278 cpu : usr=96.21%, sys=3.55%, ctx=7, majf=0, minf=85 00:40:47.278 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:47.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:47.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:47.278 issued rwts: total=1350,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:47.278 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:47.278 filename0: (groupid=0, jobs=1): err= 0: pid=2286735: Mon Jul 22 10:56:52 2024 00:40:47.278 read: IOPS=208, BW=26.0MiB/s (27.3MB/s)(131MiB/5045msec) 00:40:47.278 slat (nsec): min=5419, max=54827, avg=7949.61, stdev=2326.15 00:40:47.278 clat (usec): min=5885, max=55972, avg=14348.41, stdev=6678.78 00:40:47.278 lat (usec): min=5894, max=55980, avg=14356.36, stdev=6678.78 00:40:47.278 clat percentiles (usec): 00:40:47.278 
| 1.00th=[ 6783], 5.00th=[ 9110], 10.00th=[10159], 20.00th=[11076], 00:40:47.278 | 30.00th=[11863], 40.00th=[12387], 50.00th=[13435], 60.00th=[14353], 00:40:47.278 | 70.00th=[15139], 80.00th=[15926], 90.00th=[16909], 95.00th=[17957], 00:40:47.278 | 99.00th=[51643], 99.50th=[52691], 99.90th=[54789], 99.95th=[55837], 00:40:47.278 | 99.99th=[55837] 00:40:47.278 bw ( KiB/s): min=19712, max=31232, per=28.77%, avg=26854.40, stdev=3218.89, samples=10 00:40:47.278 iops : min= 154, max= 244, avg=209.80, stdev=25.15, samples=10 00:40:47.278 lat (msec) : 10=9.04%, 20=88.11%, 50=1.14%, 100=1.71% 00:40:47.278 cpu : usr=96.21%, sys=3.55%, ctx=10, majf=0, minf=172 00:40:47.278 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:47.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:47.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:47.278 issued rwts: total=1051,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:47.278 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:47.278 00:40:47.278 Run status group 0 (all jobs): 00:40:47.278 READ: bw=91.2MiB/s (95.6MB/s), 26.0MiB/s-33.4MiB/s (27.3MB/s-35.1MB/s), io=460MiB (482MB), run=5030-5045msec 00:40:47.278 10:56:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:40:47.278 10:56:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:40:47.278 10:56:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:47.278 10:56:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:47.278 10:56:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:40:47.278 10:56:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:47.278 10:56:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:47.278 10:56:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:47.538 10:56:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:47.538 10:56:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:47.538 10:56:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:47.538 10:56:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:47.538 10:56:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:47.538 10:56:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:40:47.538 10:56:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:40:47.538 10:56:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:40:47.538 10:56:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:40:47.538 10:56:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:40:47.538 10:56:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:40:47.538 10:56:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:40:47.538 10:56:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:40:47.538 10:56:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:47.538 10:56:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:40:47.538 10:56:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:40:47.538 10:56:52 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:40:47.538 10:56:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:47.538 10:56:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:47.538 bdev_null0 00:40:47.538 10:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:47.538 10:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:47.538 10:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:47.538 10:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:47.538 10:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:47.538 10:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:47.538 10:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:47.538 10:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:47.538 10:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:47.538 10:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:47.538 10:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:47.538 10:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:47.538 [2024-07-22 10:56:53.037653] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:47.538 10:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:47.538 10:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:47.538 10:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:40:47.538 10:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:40:47.538 10:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:40:47.538 10:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:47.538 10:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:47.538 bdev_null1 00:40:47.538 10:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:47.538 10:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:40:47.538 10:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:47.538 10:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:47.538 10:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:47.538 10:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:40:47.538 10:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:47.538 10:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:40:47.538 10:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:47.538 10:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:47.539 bdev_null2 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:47.539 { 00:40:47.539 "params": { 00:40:47.539 "name": "Nvme$subsystem", 00:40:47.539 "trtype": "$TEST_TRANSPORT", 00:40:47.539 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:47.539 "adrfam": "ipv4", 00:40:47.539 "trsvcid": "$NVMF_PORT", 00:40:47.539 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:47.539 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:47.539 "hdgst": ${hdgst:-false}, 00:40:47.539 "ddgst": ${ddgst:-false} 00:40:47.539 }, 00:40:47.539 "method": "bdev_nvme_attach_controller" 00:40:47.539 } 00:40:47.539 EOF 00:40:47.539 )") 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:47.539 { 00:40:47.539 "params": { 00:40:47.539 "name": "Nvme$subsystem", 00:40:47.539 "trtype": "$TEST_TRANSPORT", 00:40:47.539 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:47.539 "adrfam": "ipv4", 00:40:47.539 "trsvcid": "$NVMF_PORT", 00:40:47.539 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:47.539 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:47.539 "hdgst": ${hdgst:-false}, 00:40:47.539 "ddgst": ${ddgst:-false} 00:40:47.539 }, 00:40:47.539 "method": "bdev_nvme_attach_controller" 00:40:47.539 } 00:40:47.539 EOF 00:40:47.539 )") 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file <= files )) 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:47.539 { 00:40:47.539 "params": { 00:40:47.539 "name": "Nvme$subsystem", 00:40:47.539 "trtype": "$TEST_TRANSPORT", 00:40:47.539 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:47.539 "adrfam": "ipv4", 00:40:47.539 "trsvcid": "$NVMF_PORT", 00:40:47.539 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:47.539 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:47.539 "hdgst": ${hdgst:-false}, 00:40:47.539 "ddgst": ${ddgst:-false} 00:40:47.539 }, 00:40:47.539 "method": "bdev_nvme_attach_controller" 00:40:47.539 } 00:40:47.539 EOF 00:40:47.539 )") 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:40:47.539 "params": { 00:40:47.539 "name": "Nvme0", 00:40:47.539 "trtype": "tcp", 00:40:47.539 "traddr": "10.0.0.2", 00:40:47.539 "adrfam": "ipv4", 00:40:47.539 "trsvcid": "4420", 00:40:47.539 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:47.539 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:47.539 "hdgst": false, 00:40:47.539 "ddgst": false 00:40:47.539 }, 00:40:47.539 "method": "bdev_nvme_attach_controller" 00:40:47.539 },{ 00:40:47.539 "params": { 00:40:47.539 "name": "Nvme1", 00:40:47.539 "trtype": "tcp", 00:40:47.539 "traddr": "10.0.0.2", 00:40:47.539 "adrfam": "ipv4", 00:40:47.539 "trsvcid": "4420", 00:40:47.539 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:47.539 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:47.539 "hdgst": false, 00:40:47.539 "ddgst": false 00:40:47.539 }, 00:40:47.539 "method": "bdev_nvme_attach_controller" 00:40:47.539 },{ 00:40:47.539 "params": { 00:40:47.539 "name": "Nvme2", 00:40:47.539 "trtype": "tcp", 00:40:47.539 "traddr": "10.0.0.2", 00:40:47.539 "adrfam": "ipv4", 00:40:47.539 "trsvcid": "4420", 00:40:47.539 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:40:47.539 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:40:47.539 "hdgst": false, 00:40:47.539 "ddgst": false 00:40:47.539 }, 00:40:47.539 "method": "bdev_nvme_attach_controller" 00:40:47.539 }' 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:47.539 10:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:48.106 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:40:48.106 ... 00:40:48.106 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:40:48.106 ... 00:40:48.106 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:40:48.106 ... 00:40:48.106 fio-3.35 00:40:48.106 Starting 24 threads 00:40:48.106 EAL: No free 2048 kB hugepages reported on node 1 00:41:00.318 00:41:00.318 filename0: (groupid=0, jobs=1): err= 0: pid=2288097: Mon Jul 22 10:57:04 2024 00:41:00.318 read: IOPS=500, BW=2001KiB/s (2049kB/s)(19.6MiB/10013msec) 00:41:00.318 slat (nsec): min=5590, max=62924, avg=10011.37, stdev=6200.24 00:41:00.318 clat (usec): min=2922, max=34029, avg=31904.72, stdev=2649.54 00:41:00.318 lat (usec): min=2942, max=34035, avg=31914.73, stdev=2648.80 00:41:00.318 clat percentiles (usec): 00:41:00.318 | 1.00th=[18482], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:41:00.318 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:41:00.318 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:41:00.318 | 99.00th=[33817], 99.50th=[33817], 99.90th=[33817], 99.95th=[33817], 00:41:00.318 | 99.99th=[33817] 00:41:00.318 bw ( KiB/s): min= 1916, max= 2304, per=4.23%, avg=2000.16, stdev=97.64, samples=19 00:41:00.318 iops : min= 479, max= 576, avg=500.00, stdev=24.39, samples=19 00:41:00.318 lat (msec) : 4=0.14%, 10=0.18%, 20=1.46%, 50=98.22% 00:41:00.318 cpu : usr=99.08%, sys=0.61%, ctx=21, majf=0, minf=49 00:41:00.318 IO depths : 1=6.2%, 2=12.4%, 4=24.8%, 8=50.3%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:00.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.318 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.318 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:00.318 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:00.318 filename0: (groupid=0, jobs=1): err= 0: pid=2288098: Mon Jul 22 10:57:04 2024 00:41:00.318 read: IOPS=490, BW=1962KiB/s (2010kB/s)(19.2MiB/10012msec) 00:41:00.318 slat (nsec): min=5616, max=58290, avg=15850.33, stdev=10265.46 00:41:00.318 clat (usec): min=19328, max=64241, avg=32470.60, stdev=2435.29 00:41:00.318 lat (usec): min=19335, max=64253, avg=32486.45, stdev=2435.24 00:41:00.318 clat percentiles (usec): 00:41:00.318 | 1.00th=[31065], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:41:00.318 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:41:00.318 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:41:00.318 | 99.00th=[34866], 99.50th=[58459], 99.90th=[64226], 99.95th=[64226], 00:41:00.318 | 99.99th=[64226] 00:41:00.318 bw ( KiB/s): min= 1795, max= 2048, per=4.16%, avg=1967.11, stdev=74.90, samples=19 00:41:00.318 iops : min= 448, max= 512, avg=491.74, stdev=18.82, samples=19 00:41:00.318 lat (msec) : 20=0.04%, 
50=99.31%, 100=0.65% 00:41:00.318 cpu : usr=98.96%, sys=0.66%, ctx=52, majf=0, minf=62 00:41:00.318 IO depths : 1=5.3%, 2=11.5%, 4=25.0%, 8=51.0%, 16=7.2%, 32=0.0%, >=64=0.0% 00:41:00.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.318 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.318 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:00.318 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:00.318 filename0: (groupid=0, jobs=1): err= 0: pid=2288099: Mon Jul 22 10:57:04 2024 00:41:00.318 read: IOPS=493, BW=1974KiB/s (2022kB/s)(19.4MiB/10049msec) 00:41:00.318 slat (nsec): min=5638, max=86552, avg=21742.29, stdev=13502.30 00:41:00.318 clat (usec): min=18175, max=63719, avg=32225.96, stdev=2180.62 00:41:00.318 lat (usec): min=18192, max=63730, avg=32247.70, stdev=2180.64 00:41:00.318 clat percentiles (usec): 00:41:00.318 | 1.00th=[30540], 5.00th=[31589], 10.00th=[31851], 20.00th=[31851], 00:41:00.318 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32113], 00:41:00.318 | 70.00th=[32375], 80.00th=[32375], 90.00th=[32637], 95.00th=[33162], 00:41:00.318 | 99.00th=[33817], 99.50th=[36963], 99.90th=[63701], 99.95th=[63701], 00:41:00.318 | 99.99th=[63701] 00:41:00.318 bw ( KiB/s): min= 1920, max= 2048, per=4.18%, avg=1977.10, stdev=64.78, samples=20 00:41:00.318 iops : min= 480, max= 512, avg=494.20, stdev=16.12, samples=20 00:41:00.318 lat (msec) : 20=0.65%, 50=99.03%, 100=0.32% 00:41:00.318 cpu : usr=99.03%, sys=0.65%, ctx=57, majf=0, minf=68 00:41:00.318 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:00.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.318 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.318 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:00.318 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:00.318 filename0: (groupid=0, jobs=1): err= 0: pid=2288100: Mon Jul 22 10:57:04 2024 00:41:00.318 read: IOPS=490, BW=1962KiB/s (2009kB/s)(19.2MiB/10046msec) 00:41:00.318 slat (nsec): min=5602, max=53100, avg=12768.14, stdev=9129.62 00:41:00.318 clat (usec): min=16932, max=73804, avg=32502.59, stdev=2964.10 00:41:00.318 lat (usec): min=16938, max=73814, avg=32515.36, stdev=2963.82 00:41:00.318 clat percentiles (usec): 00:41:00.318 | 1.00th=[31065], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:41:00.318 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:41:00.318 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:41:00.318 | 99.00th=[43779], 99.50th=[57410], 99.90th=[71828], 99.95th=[71828], 00:41:00.318 | 99.99th=[73925] 00:41:00.318 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1966.95, stdev=76.59, samples=19 00:41:00.318 iops : min= 448, max= 512, avg=491.74, stdev=19.15, samples=19 00:41:00.318 lat (msec) : 20=0.16%, 50=99.19%, 100=0.65% 00:41:00.318 cpu : usr=99.02%, sys=0.66%, ctx=59, majf=0, minf=57 00:41:00.318 IO depths : 1=5.8%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.7%, 32=0.0%, >=64=0.0% 00:41:00.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.318 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.318 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:00.318 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:00.318 filename0: (groupid=0, jobs=1): err= 0: pid=2288101: Mon Jul 22 10:57:04 
2024 00:41:00.318 read: IOPS=493, BW=1975KiB/s (2022kB/s)(19.4MiB/10047msec) 00:41:00.318 slat (nsec): min=5624, max=92825, avg=16511.18, stdev=14036.76 00:41:00.318 clat (usec): min=17894, max=64058, avg=32277.33, stdev=2206.14 00:41:00.318 lat (usec): min=17949, max=64065, avg=32293.84, stdev=2204.81 00:41:00.318 clat percentiles (usec): 00:41:00.318 | 1.00th=[30540], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:41:00.318 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:41:00.318 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:41:00.318 | 99.00th=[33817], 99.50th=[35390], 99.90th=[64226], 99.95th=[64226], 00:41:00.318 | 99.99th=[64226] 00:41:00.318 bw ( KiB/s): min= 1916, max= 2052, per=4.18%, avg=1977.60, stdev=65.76, samples=20 00:41:00.318 iops : min= 479, max= 513, avg=494.40, stdev=16.44, samples=20 00:41:00.318 lat (msec) : 20=0.65%, 50=99.03%, 100=0.32% 00:41:00.318 cpu : usr=98.55%, sys=0.87%, ctx=47, majf=0, minf=67 00:41:00.318 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:00.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.318 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.318 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:00.318 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:00.318 filename0: (groupid=0, jobs=1): err= 0: pid=2288102: Mon Jul 22 10:57:04 2024 00:41:00.318 read: IOPS=494, BW=1979KiB/s (2027kB/s)(19.4MiB/10040msec) 00:41:00.318 slat (nsec): min=5523, max=79436, avg=15940.88, stdev=10897.48 00:41:00.318 clat (usec): min=12732, max=58921, avg=32183.23, stdev=2935.92 00:41:00.318 lat (usec): min=12745, max=58929, avg=32199.17, stdev=2936.16 00:41:00.318 clat percentiles (usec): 00:41:00.318 | 1.00th=[21365], 5.00th=[31327], 10.00th=[31851], 20.00th=[31851], 00:41:00.318 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32113], 00:41:00.318 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32637], 95.00th=[33162], 00:41:00.318 | 99.00th=[46400], 99.50th=[50070], 99.90th=[57410], 99.95th=[57410], 00:41:00.318 | 99.99th=[58983] 00:41:00.318 bw ( KiB/s): min= 1788, max= 2144, per=4.20%, avg=1983.53, stdev=92.37, samples=19 00:41:00.318 iops : min= 447, max= 536, avg=495.84, stdev=23.02, samples=19 00:41:00.318 lat (msec) : 20=0.79%, 50=98.61%, 100=0.60% 00:41:00.318 cpu : usr=99.18%, sys=0.49%, ctx=74, majf=0, minf=80 00:41:00.318 IO depths : 1=5.7%, 2=11.5%, 4=23.5%, 8=52.3%, 16=6.9%, 32=0.0%, >=64=0.0% 00:41:00.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.318 complete : 0=0.0%, 4=93.7%, 8=0.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.318 issued rwts: total=4968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:00.318 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:00.318 filename0: (groupid=0, jobs=1): err= 0: pid=2288103: Mon Jul 22 10:57:04 2024 00:41:00.318 read: IOPS=493, BW=1975KiB/s (2022kB/s)(19.3MiB/10007msec) 00:41:00.318 slat (usec): min=5, max=102, avg=25.02, stdev=17.08 00:41:00.318 clat (usec): min=17930, max=47743, avg=32167.34, stdev=1256.85 00:41:00.318 lat (usec): min=17938, max=47776, avg=32192.37, stdev=1256.44 00:41:00.318 clat percentiles (usec): 00:41:00.318 | 1.00th=[30802], 5.00th=[31589], 10.00th=[31589], 20.00th=[31851], 00:41:00.318 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32113], 00:41:00.318 | 70.00th=[32375], 80.00th=[32375], 90.00th=[32637], 95.00th=[33162], 
00:41:00.318 | 99.00th=[34341], 99.50th=[37487], 99.90th=[47449], 99.95th=[47449], 00:41:00.318 | 99.99th=[47973] 00:41:00.318 bw ( KiB/s): min= 1888, max= 2048, per=4.17%, avg=1972.21, stdev=66.79, samples=19 00:41:00.318 iops : min= 472, max= 512, avg=493.05, stdev=16.70, samples=19 00:41:00.318 lat (msec) : 20=0.28%, 50=99.72% 00:41:00.318 cpu : usr=98.50%, sys=0.93%, ctx=264, majf=0, minf=70 00:41:00.318 IO depths : 1=6.0%, 2=12.1%, 4=24.4%, 8=50.9%, 16=6.6%, 32=0.0%, >=64=0.0% 00:41:00.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.318 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.318 issued rwts: total=4940,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:00.318 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:00.318 filename0: (groupid=0, jobs=1): err= 0: pid=2288104: Mon Jul 22 10:57:04 2024 00:41:00.318 read: IOPS=493, BW=1975KiB/s (2022kB/s)(19.4MiB/10046msec) 00:41:00.318 slat (nsec): min=5614, max=59099, avg=12728.43, stdev=9171.10 00:41:00.318 clat (usec): min=14146, max=64053, avg=32301.96, stdev=2274.89 00:41:00.318 lat (usec): min=14154, max=64060, avg=32314.69, stdev=2274.46 00:41:00.318 clat percentiles (usec): 00:41:00.318 | 1.00th=[30540], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:41:00.318 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:41:00.318 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:41:00.318 | 99.00th=[33817], 99.50th=[34866], 99.90th=[64226], 99.95th=[64226], 00:41:00.318 | 99.99th=[64226] 00:41:00.318 bw ( KiB/s): min= 1920, max= 2052, per=4.18%, avg=1977.75, stdev=65.53, samples=20 00:41:00.318 iops : min= 480, max= 513, avg=494.40, stdev=16.34, samples=20 00:41:00.318 lat (msec) : 20=0.65%, 50=99.03%, 100=0.32% 00:41:00.318 cpu : usr=98.67%, sys=0.77%, ctx=32, majf=0, minf=55 00:41:00.318 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:00.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.318 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.318 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:00.318 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:00.318 filename1: (groupid=0, jobs=1): err= 0: pid=2288105: Mon Jul 22 10:57:04 2024 00:41:00.318 read: IOPS=499, BW=1996KiB/s (2044kB/s)(19.6MiB/10068msec) 00:41:00.318 slat (nsec): min=5584, max=59779, avg=9547.69, stdev=5281.35 00:41:00.318 clat (usec): min=14949, max=71643, avg=31977.45, stdev=3250.96 00:41:00.318 lat (usec): min=14954, max=71652, avg=31986.99, stdev=3251.01 00:41:00.318 clat percentiles (usec): 00:41:00.318 | 1.00th=[19530], 5.00th=[31065], 10.00th=[31851], 20.00th=[32113], 00:41:00.318 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:41:00.318 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:41:00.318 | 99.00th=[33817], 99.50th=[35390], 99.90th=[71828], 99.95th=[71828], 00:41:00.318 | 99.99th=[71828] 00:41:00.318 bw ( KiB/s): min= 1920, max= 2299, per=4.24%, avg=2002.70, stdev=94.43, samples=20 00:41:00.318 iops : min= 480, max= 574, avg=500.60, stdev=23.47, samples=20 00:41:00.318 lat (msec) : 20=1.95%, 50=97.73%, 100=0.32% 00:41:00.318 cpu : usr=99.06%, sys=0.62%, ctx=60, majf=0, minf=77 00:41:00.318 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:00.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:41:00.318 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.318 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:00.318 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:00.318 filename1: (groupid=0, jobs=1): err= 0: pid=2288106: Mon Jul 22 10:57:04 2024 00:41:00.318 read: IOPS=492, BW=1972KiB/s (2019kB/s)(19.3MiB/10033msec) 00:41:00.318 slat (nsec): min=5628, max=98204, avg=17531.12, stdev=12861.11 00:41:00.318 clat (usec): min=13998, max=58323, avg=32294.08, stdev=3052.80 00:41:00.318 lat (usec): min=14007, max=58338, avg=32311.61, stdev=3053.10 00:41:00.319 clat percentiles (usec): 00:41:00.319 | 1.00th=[21890], 5.00th=[31589], 10.00th=[31851], 20.00th=[31851], 00:41:00.319 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32113], 00:41:00.319 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:41:00.319 | 99.00th=[49546], 99.50th=[56886], 99.90th=[58459], 99.95th=[58459], 00:41:00.319 | 99.99th=[58459] 00:41:00.319 bw ( KiB/s): min= 1795, max= 2096, per=4.18%, avg=1974.63, stdev=76.14, samples=19 00:41:00.319 iops : min= 448, max= 524, avg=493.58, stdev=19.10, samples=19 00:41:00.319 lat (msec) : 20=0.53%, 50=98.63%, 100=0.85% 00:41:00.319 cpu : usr=99.15%, sys=0.53%, ctx=39, majf=0, minf=94 00:41:00.319 IO depths : 1=5.3%, 2=11.2%, 4=24.0%, 8=52.1%, 16=7.3%, 32=0.0%, >=64=0.0% 00:41:00.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.319 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.319 issued rwts: total=4946,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:00.319 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:00.319 filename1: (groupid=0, jobs=1): err= 0: pid=2288107: Mon Jul 22 10:57:04 2024 00:41:00.319 read: IOPS=494, BW=1977KiB/s (2024kB/s)(19.3MiB/10004msec) 00:41:00.319 slat (nsec): min=5588, max=77410, avg=14945.82, stdev=12495.79 00:41:00.319 clat (usec): min=18023, max=42122, avg=32248.82, stdev=1090.20 00:41:00.319 lat (usec): min=18032, max=42129, avg=32263.76, stdev=1088.49 00:41:00.319 clat percentiles (usec): 00:41:00.319 | 1.00th=[31065], 5.00th=[31851], 10.00th=[31851], 20.00th=[31851], 00:41:00.319 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:41:00.319 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:41:00.319 | 99.00th=[33817], 99.50th=[34341], 99.90th=[41157], 99.95th=[41681], 00:41:00.319 | 99.99th=[42206] 00:41:00.319 bw ( KiB/s): min= 1920, max= 2048, per=4.17%, avg=1973.89, stdev=64.93, samples=19 00:41:00.319 iops : min= 480, max= 512, avg=493.47, stdev=16.23, samples=19 00:41:00.319 lat (msec) : 20=0.32%, 50=99.68% 00:41:00.319 cpu : usr=99.19%, sys=0.49%, ctx=44, majf=0, minf=60 00:41:00.319 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:41:00.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.319 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.319 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:00.319 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:00.319 filename1: (groupid=0, jobs=1): err= 0: pid=2288108: Mon Jul 22 10:57:04 2024 00:41:00.319 read: IOPS=498, BW=1994KiB/s (2042kB/s)(19.5MiB/10037msec) 00:41:00.319 slat (nsec): min=5578, max=81013, avg=10801.63, stdev=8351.77 00:41:00.319 clat (usec): min=13097, max=68689, avg=32029.34, stdev=4565.18 00:41:00.319 lat (usec): min=13104, 
max=68708, avg=32040.15, stdev=4565.30 00:41:00.319 clat percentiles (usec): 00:41:00.319 | 1.00th=[20841], 5.00th=[24773], 10.00th=[26084], 20.00th=[29230], 00:41:00.319 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:41:00.319 | 70.00th=[32637], 80.00th=[33162], 90.00th=[37487], 95.00th=[39584], 00:41:00.319 | 99.00th=[48497], 99.50th=[51643], 99.90th=[57934], 99.95th=[57934], 00:41:00.319 | 99.99th=[68682] 00:41:00.319 bw ( KiB/s): min= 1808, max= 2112, per=4.23%, avg=1998.05, stdev=78.68, samples=19 00:41:00.319 iops : min= 452, max= 528, avg=499.47, stdev=19.64, samples=19 00:41:00.319 lat (msec) : 20=0.36%, 50=98.80%, 100=0.84% 00:41:00.319 cpu : usr=98.99%, sys=0.57%, ctx=124, majf=0, minf=138 00:41:00.319 IO depths : 1=0.1%, 2=0.2%, 4=3.4%, 8=80.2%, 16=16.2%, 32=0.0%, >=64=0.0% 00:41:00.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.319 complete : 0=0.0%, 4=89.2%, 8=8.8%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.319 issued rwts: total=5004,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:00.319 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:00.319 filename1: (groupid=0, jobs=1): err= 0: pid=2288109: Mon Jul 22 10:57:04 2024 00:41:00.319 read: IOPS=492, BW=1971KiB/s (2018kB/s)(19.3MiB/10035msec) 00:41:00.319 slat (nsec): min=5576, max=88112, avg=20072.47, stdev=14647.14 00:41:00.319 clat (usec): min=23222, max=72133, avg=32304.28, stdev=3042.37 00:41:00.319 lat (usec): min=23228, max=72140, avg=32324.35, stdev=3041.57 00:41:00.319 clat percentiles (usec): 00:41:00.319 | 1.00th=[25035], 5.00th=[26870], 10.00th=[31589], 20.00th=[31851], 00:41:00.319 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:41:00.319 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[36963], 00:41:00.319 | 99.00th=[41157], 99.50th=[51643], 99.90th=[71828], 99.95th=[71828], 00:41:00.319 | 99.99th=[71828] 00:41:00.319 bw ( KiB/s): min= 1792, max= 2048, per=4.18%, avg=1974.47, stdev=68.56, samples=19 00:41:00.319 iops : min= 448, max= 512, avg=493.58, stdev=17.11, samples=19 00:41:00.319 lat (msec) : 50=99.35%, 100=0.65% 00:41:00.319 cpu : usr=98.93%, sys=0.70%, ctx=44, majf=0, minf=78 00:41:00.319 IO depths : 1=3.9%, 2=8.0%, 4=16.9%, 8=61.2%, 16=10.1%, 32=0.0%, >=64=0.0% 00:41:00.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.319 complete : 0=0.0%, 4=92.2%, 8=3.5%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.319 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:00.319 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:00.319 filename1: (groupid=0, jobs=1): err= 0: pid=2288110: Mon Jul 22 10:57:04 2024 00:41:00.319 read: IOPS=489, BW=1959KiB/s (2006kB/s)(19.2MiB/10035msec) 00:41:00.319 slat (nsec): min=5565, max=77756, avg=13516.88, stdev=10332.87 00:41:00.319 clat (usec): min=16281, max=76721, avg=32580.28, stdev=5129.81 00:41:00.319 lat (usec): min=16309, max=76738, avg=32593.79, stdev=5129.69 00:41:00.319 clat percentiles (usec): 00:41:00.319 | 1.00th=[19530], 5.00th=[25035], 10.00th=[26870], 20.00th=[31851], 00:41:00.319 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:41:00.319 | 70.00th=[32637], 80.00th=[33424], 90.00th=[37487], 95.00th=[40633], 00:41:00.319 | 99.00th=[49546], 99.50th=[53740], 99.90th=[77071], 99.95th=[77071], 00:41:00.319 | 99.99th=[77071] 00:41:00.319 bw ( KiB/s): min= 1795, max= 2048, per=4.14%, avg=1957.84, stdev=61.14, samples=19 00:41:00.319 iops : min= 448, max= 512, avg=489.42, 
stdev=15.40, samples=19 00:41:00.319 lat (msec) : 20=1.10%, 50=98.01%, 100=0.90% 00:41:00.319 cpu : usr=98.44%, sys=0.88%, ctx=97, majf=0, minf=80 00:41:00.319 IO depths : 1=1.8%, 2=3.5%, 4=9.1%, 8=72.5%, 16=13.1%, 32=0.0%, >=64=0.0% 00:41:00.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.319 complete : 0=0.0%, 4=90.3%, 8=6.4%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.319 issued rwts: total=4914,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:00.319 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:00.319 filename1: (groupid=0, jobs=1): err= 0: pid=2288111: Mon Jul 22 10:57:04 2024 00:41:00.319 read: IOPS=490, BW=1963KiB/s (2010kB/s)(19.2MiB/10043msec) 00:41:00.319 slat (nsec): min=5583, max=54898, avg=13506.02, stdev=9172.23 00:41:00.319 clat (usec): min=19984, max=71790, avg=32482.26, stdev=2752.87 00:41:00.319 lat (usec): min=19990, max=71818, avg=32495.77, stdev=2752.60 00:41:00.319 clat percentiles (usec): 00:41:00.319 | 1.00th=[31327], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:41:00.319 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:41:00.319 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:41:00.319 | 99.00th=[33817], 99.50th=[56361], 99.90th=[71828], 99.95th=[71828], 00:41:00.319 | 99.99th=[71828] 00:41:00.319 bw ( KiB/s): min= 1795, max= 2048, per=4.16%, avg=1967.11, stdev=76.21, samples=19 00:41:00.319 iops : min= 448, max= 512, avg=491.74, stdev=19.15, samples=19 00:41:00.319 lat (msec) : 20=0.04%, 50=99.31%, 100=0.65% 00:41:00.319 cpu : usr=99.22%, sys=0.44%, ctx=71, majf=0, minf=58 00:41:00.319 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:00.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.319 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.319 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:00.319 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:00.319 filename1: (groupid=0, jobs=1): err= 0: pid=2288112: Mon Jul 22 10:57:04 2024 00:41:00.319 read: IOPS=494, BW=1977KiB/s (2024kB/s)(19.3MiB/10004msec) 00:41:00.319 slat (usec): min=5, max=102, avg=23.82, stdev=15.96 00:41:00.319 clat (usec): min=18020, max=37089, avg=32166.01, stdev=959.00 00:41:00.319 lat (usec): min=18065, max=37119, avg=32189.83, stdev=958.05 00:41:00.319 clat percentiles (usec): 00:41:00.319 | 1.00th=[31065], 5.00th=[31589], 10.00th=[31851], 20.00th=[31851], 00:41:00.319 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32113], 00:41:00.319 | 70.00th=[32375], 80.00th=[32375], 90.00th=[32637], 95.00th=[33162], 00:41:00.319 | 99.00th=[33817], 99.50th=[34341], 99.90th=[36963], 99.95th=[36963], 00:41:00.319 | 99.99th=[36963] 00:41:00.319 bw ( KiB/s): min= 1920, max= 2048, per=4.17%, avg=1973.89, stdev=64.93, samples=19 00:41:00.319 iops : min= 480, max= 512, avg=493.47, stdev=16.23, samples=19 00:41:00.319 lat (msec) : 20=0.32%, 50=99.68% 00:41:00.319 cpu : usr=98.89%, sys=0.72%, ctx=55, majf=0, minf=71 00:41:00.319 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:00.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.319 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.319 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:00.319 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:00.319 filename2: (groupid=0, jobs=1): err= 0: 
pid=2288113: Mon Jul 22 10:57:04 2024 00:41:00.319 read: IOPS=492, BW=1971KiB/s (2018kB/s)(19.4MiB/10068msec) 00:41:00.319 slat (nsec): min=5588, max=89156, avg=21192.68, stdev=15901.14 00:41:00.319 clat (usec): min=16512, max=71672, avg=32294.69, stdev=2687.24 00:41:00.319 lat (usec): min=16541, max=71681, avg=32315.89, stdev=2686.06 00:41:00.319 clat percentiles (usec): 00:41:00.319 | 1.00th=[30540], 5.00th=[31589], 10.00th=[31851], 20.00th=[31851], 00:41:00.319 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:41:00.319 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:41:00.319 | 99.00th=[34341], 99.50th=[37487], 99.90th=[71828], 99.95th=[71828], 00:41:00.319 | 99.99th=[71828] 00:41:00.319 bw ( KiB/s): min= 1916, max= 2048, per=4.18%, avg=1977.15, stdev=65.25, samples=20 00:41:00.319 iops : min= 479, max= 512, avg=494.25, stdev=16.27, samples=20 00:41:00.319 lat (msec) : 20=0.77%, 50=98.91%, 100=0.32% 00:41:00.319 cpu : usr=99.09%, sys=0.56%, ctx=138, majf=0, minf=62 00:41:00.319 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:41:00.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.319 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.319 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:00.319 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:00.319 filename2: (groupid=0, jobs=1): err= 0: pid=2288114: Mon Jul 22 10:57:04 2024 00:41:00.319 read: IOPS=506, BW=2028KiB/s (2077kB/s)(19.8MiB/10016msec) 00:41:00.319 slat (nsec): min=5520, max=71997, avg=8151.43, stdev=3558.53 00:41:00.319 clat (usec): min=1215, max=34021, avg=31486.30, stdev=4454.74 00:41:00.319 lat (usec): min=1231, max=34029, avg=31494.45, stdev=4453.01 00:41:00.319 clat percentiles (usec): 00:41:00.319 | 1.00th=[ 2933], 5.00th=[31065], 10.00th=[31851], 20.00th=[32113], 00:41:00.319 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:41:00.319 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:41:00.319 | 99.00th=[33817], 99.50th=[33817], 99.90th=[33817], 99.95th=[33817], 00:41:00.319 | 99.99th=[33817] 00:41:00.319 bw ( KiB/s): min= 1920, max= 2864, per=4.28%, avg=2024.80, stdev=207.61, samples=20 00:41:00.319 iops : min= 480, max= 716, avg=506.20, stdev=51.90, samples=20 00:41:00.319 lat (msec) : 2=0.18%, 4=1.67%, 10=0.16%, 20=1.26%, 50=96.73% 00:41:00.319 cpu : usr=98.50%, sys=1.19%, ctx=29, majf=0, minf=115 00:41:00.319 IO depths : 1=6.1%, 2=12.2%, 4=24.5%, 8=50.7%, 16=6.5%, 32=0.0%, >=64=0.0% 00:41:00.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.319 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.319 issued rwts: total=5078,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:00.319 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:00.319 filename2: (groupid=0, jobs=1): err= 0: pid=2288115: Mon Jul 22 10:57:04 2024 00:41:00.319 read: IOPS=490, BW=1964KiB/s (2011kB/s)(19.2MiB/10037msec) 00:41:00.319 slat (usec): min=5, max=101, avg=24.79, stdev=15.57 00:41:00.319 clat (usec): min=30624, max=59042, avg=32341.57, stdev=1922.48 00:41:00.319 lat (usec): min=30638, max=59050, avg=32366.37, stdev=1921.56 00:41:00.319 clat percentiles (usec): 00:41:00.319 | 1.00th=[31327], 5.00th=[31589], 10.00th=[31851], 20.00th=[31851], 00:41:00.319 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32113], 00:41:00.319 | 70.00th=[32375], 
80.00th=[32375], 90.00th=[32637], 95.00th=[33162], 00:41:00.319 | 99.00th=[34341], 99.50th=[52691], 99.90th=[57410], 99.95th=[57410], 00:41:00.319 | 99.99th=[58983] 00:41:00.319 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1966.89, stdev=76.16, samples=19 00:41:00.319 iops : min= 448, max= 512, avg=491.68, stdev=19.00, samples=19 00:41:00.319 lat (msec) : 50=99.35%, 100=0.65% 00:41:00.319 cpu : usr=99.02%, sys=0.61%, ctx=54, majf=0, minf=62 00:41:00.319 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:00.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.319 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.319 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:00.319 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:00.319 filename2: (groupid=0, jobs=1): err= 0: pid=2288116: Mon Jul 22 10:57:04 2024 00:41:00.320 read: IOPS=493, BW=1975KiB/s (2022kB/s)(19.4MiB/10048msec) 00:41:00.320 slat (nsec): min=5617, max=67690, avg=16470.15, stdev=11403.00 00:41:00.320 clat (usec): min=17700, max=63595, avg=32266.85, stdev=2202.13 00:41:00.320 lat (usec): min=17726, max=63603, avg=32283.32, stdev=2202.19 00:41:00.320 clat percentiles (usec): 00:41:00.320 | 1.00th=[25560], 5.00th=[31589], 10.00th=[31851], 20.00th=[31851], 00:41:00.320 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:41:00.320 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32637], 95.00th=[33162], 00:41:00.320 | 99.00th=[34341], 99.50th=[37487], 99.90th=[63701], 99.95th=[63701], 00:41:00.320 | 99.99th=[63701] 00:41:00.320 bw ( KiB/s): min= 1920, max= 2048, per=4.18%, avg=1977.10, stdev=64.78, samples=20 00:41:00.320 iops : min= 480, max= 512, avg=494.20, stdev=16.12, samples=20 00:41:00.320 lat (msec) : 20=0.65%, 50=99.03%, 100=0.32% 00:41:00.320 cpu : usr=98.36%, sys=1.07%, ctx=39, majf=0, minf=57 00:41:00.320 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:00.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.320 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.320 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:00.320 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:00.320 filename2: (groupid=0, jobs=1): err= 0: pid=2288117: Mon Jul 22 10:57:04 2024 00:41:00.320 read: IOPS=490, BW=1962KiB/s (2009kB/s)(19.2MiB/10013msec) 00:41:00.320 slat (nsec): min=5634, max=76443, avg=19259.05, stdev=11649.14 00:41:00.320 clat (usec): min=22825, max=64084, avg=32449.29, stdev=2205.74 00:41:00.320 lat (usec): min=22832, max=64108, avg=32468.55, stdev=2205.48 00:41:00.320 clat percentiles (usec): 00:41:00.320 | 1.00th=[31065], 5.00th=[31851], 10.00th=[31851], 20.00th=[31851], 00:41:00.320 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:41:00.320 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:41:00.320 | 99.00th=[39584], 99.50th=[51643], 99.90th=[63701], 99.95th=[64226], 00:41:00.320 | 99.99th=[64226] 00:41:00.320 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1966.95, stdev=76.59, samples=19 00:41:00.320 iops : min= 448, max= 512, avg=491.74, stdev=19.15, samples=19 00:41:00.320 lat (msec) : 50=99.35%, 100=0.65% 00:41:00.320 cpu : usr=98.56%, sys=0.87%, ctx=39, majf=0, minf=58 00:41:00.320 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:00.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.320 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.320 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:00.320 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:00.320 filename2: (groupid=0, jobs=1): err= 0: pid=2288118: Mon Jul 22 10:57:04 2024 00:41:00.320 read: IOPS=493, BW=1976KiB/s (2023kB/s)(19.3MiB/10025msec) 00:41:00.320 slat (nsec): min=5587, max=55972, avg=13965.67, stdev=9821.09 00:41:00.320 clat (usec): min=18501, max=64093, avg=32274.60, stdev=3321.15 00:41:00.320 lat (usec): min=18509, max=64100, avg=32288.57, stdev=3321.47 00:41:00.320 clat percentiles (usec): 00:41:00.320 | 1.00th=[20579], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:41:00.320 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:41:00.320 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:41:00.320 | 99.00th=[46400], 99.50th=[57410], 99.90th=[64226], 99.95th=[64226], 00:41:00.320 | 99.99th=[64226] 00:41:00.320 bw ( KiB/s): min= 1792, max= 2144, per=4.20%, avg=1983.79, stdev=84.58, samples=19 00:41:00.320 iops : min= 448, max= 536, avg=495.95, stdev=21.15, samples=19 00:41:00.320 lat (msec) : 20=0.95%, 50=98.24%, 100=0.81% 00:41:00.320 cpu : usr=99.28%, sys=0.43%, ctx=9, majf=0, minf=75 00:41:00.320 IO depths : 1=5.6%, 2=11.5%, 4=23.8%, 8=52.1%, 16=7.0%, 32=0.0%, >=64=0.0% 00:41:00.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.320 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.320 issued rwts: total=4952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:00.320 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:00.320 filename2: (groupid=0, jobs=1): err= 0: pid=2288119: Mon Jul 22 10:57:04 2024 00:41:00.320 read: IOPS=494, BW=1976KiB/s (2024kB/s)(19.3MiB/10006msec) 00:41:00.320 slat (nsec): min=5587, max=88360, avg=17839.91, stdev=15226.82 00:41:00.320 clat (usec): min=17952, max=42148, avg=32230.16, stdev=1170.30 00:41:00.320 lat (usec): min=17993, max=42155, avg=32248.00, stdev=1168.06 00:41:00.320 clat percentiles (usec): 00:41:00.320 | 1.00th=[30802], 5.00th=[31589], 10.00th=[31851], 20.00th=[31851], 00:41:00.320 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:41:00.320 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:41:00.320 | 99.00th=[34341], 99.50th=[37487], 99.90th=[42206], 99.95th=[42206], 00:41:00.320 | 99.99th=[42206] 00:41:00.320 bw ( KiB/s): min= 1920, max= 2048, per=4.18%, avg=1974.05, stdev=64.79, samples=19 00:41:00.320 iops : min= 480, max= 512, avg=493.47, stdev=16.23, samples=19 00:41:00.320 lat (msec) : 20=0.32%, 50=99.68% 00:41:00.320 cpu : usr=98.88%, sys=0.71%, ctx=110, majf=0, minf=71 00:41:00.320 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:00.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.320 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.320 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:00.320 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:00.320 filename2: (groupid=0, jobs=1): err= 0: pid=2288120: Mon Jul 22 10:57:04 2024 00:41:00.320 read: IOPS=494, BW=1977KiB/s (2024kB/s)(19.3MiB/10005msec) 00:41:00.320 slat (nsec): min=5564, max=77341, avg=15908.58, stdev=12026.67 00:41:00.320 clat (usec): min=18485, max=37696, avg=32242.83, stdev=1048.46 
00:41:00.320 lat (usec): min=18491, max=37704, avg=32258.74, stdev=1048.09 00:41:00.320 clat percentiles (usec): 00:41:00.320 | 1.00th=[31065], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:41:00.320 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:41:00.320 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:41:00.320 | 99.00th=[33817], 99.50th=[34866], 99.90th=[37487], 99.95th=[37487], 00:41:00.320 | 99.99th=[37487] 00:41:00.320 bw ( KiB/s): min= 1920, max= 2048, per=4.17%, avg=1973.89, stdev=64.93, samples=19 00:41:00.320 iops : min= 480, max= 512, avg=493.47, stdev=16.23, samples=19 00:41:00.320 lat (msec) : 20=0.32%, 50=99.68% 00:41:00.320 cpu : usr=99.21%, sys=0.49%, ctx=8, majf=0, minf=73 00:41:00.320 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:00.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.320 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.320 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:00.320 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:00.320 00:41:00.320 Run status group 0 (all jobs): 00:41:00.320 READ: bw=46.2MiB/s (48.4MB/s), 1959KiB/s-2028KiB/s (2006kB/s-2077kB/s), io=465MiB (487MB), run=10004-10068msec 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:00.320 bdev_null0 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:41:00.320 10:57:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:00.320 10:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:00.320 10:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:00.320 10:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:00.320 10:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:00.320 [2024-07-22 10:57:05.010692] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:00.320 10:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:00.320 10:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:00.320 10:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:41:00.320 10:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:41:00.320 10:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:41:00.320 10:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:00.320 10:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:00.320 bdev_null1 00:41:00.320 10:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:00.320 10:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:00.320 10:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:00.320 10:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:00.320 10:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:00.320 10:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:00.320 10:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:00.320 10:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:00.320 10:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:00.320 10:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:00.320 10:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:00.321 10:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:00.321 10:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:00.321 10:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:41:00.321 10:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:41:00.321 10:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:41:00.321 10:57:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:41:00.321 10:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:00.321 10:57:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 
-- # local subsystem config 00:41:00.321 10:57:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:41:00.321 10:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:00.321 10:57:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:41:00.321 { 00:41:00.321 "params": { 00:41:00.321 "name": "Nvme$subsystem", 00:41:00.321 "trtype": "$TEST_TRANSPORT", 00:41:00.321 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:00.321 "adrfam": "ipv4", 00:41:00.321 "trsvcid": "$NVMF_PORT", 00:41:00.321 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:00.321 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:00.321 "hdgst": ${hdgst:-false}, 00:41:00.321 "ddgst": ${ddgst:-false} 00:41:00.321 }, 00:41:00.321 "method": "bdev_nvme_attach_controller" 00:41:00.321 } 00:41:00.321 EOF 00:41:00.321 )") 00:41:00.321 10:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:00.321 10:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:00.321 10:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:00.321 10:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:00.321 10:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:00.321 10:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:00.321 10:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:00.321 10:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:41:00.321 10:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:00.321 10:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:00.321 10:57:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:41:00.321 10:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:00.321 10:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:00.321 10:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:41:00.321 10:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:00.321 10:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:00.321 10:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:00.321 10:57:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:41:00.321 10:57:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:41:00.321 { 00:41:00.321 "params": { 00:41:00.321 "name": "Nvme$subsystem", 00:41:00.321 "trtype": "$TEST_TRANSPORT", 00:41:00.321 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:00.321 "adrfam": "ipv4", 00:41:00.321 "trsvcid": "$NVMF_PORT", 00:41:00.321 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:00.321 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:00.321 "hdgst": ${hdgst:-false}, 00:41:00.321 "ddgst": ${ddgst:-false} 00:41:00.321 }, 00:41:00.321 "method": 
"bdev_nvme_attach_controller" 00:41:00.321 } 00:41:00.321 EOF 00:41:00.321 )") 00:41:00.321 10:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:00.321 10:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:00.321 10:57:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:41:00.321 10:57:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:41:00.321 10:57:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:41:00.321 10:57:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:41:00.321 "params": { 00:41:00.321 "name": "Nvme0", 00:41:00.321 "trtype": "tcp", 00:41:00.321 "traddr": "10.0.0.2", 00:41:00.321 "adrfam": "ipv4", 00:41:00.321 "trsvcid": "4420", 00:41:00.321 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:00.321 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:00.321 "hdgst": false, 00:41:00.321 "ddgst": false 00:41:00.321 }, 00:41:00.321 "method": "bdev_nvme_attach_controller" 00:41:00.321 },{ 00:41:00.321 "params": { 00:41:00.321 "name": "Nvme1", 00:41:00.321 "trtype": "tcp", 00:41:00.321 "traddr": "10.0.0.2", 00:41:00.321 "adrfam": "ipv4", 00:41:00.321 "trsvcid": "4420", 00:41:00.321 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:00.321 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:00.321 "hdgst": false, 00:41:00.321 "ddgst": false 00:41:00.321 }, 00:41:00.321 "method": "bdev_nvme_attach_controller" 00:41:00.321 }' 00:41:00.321 10:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:00.321 10:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:00.321 10:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:00.321 10:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:00.321 10:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:41:00.321 10:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:00.321 10:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:00.321 10:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:00.321 10:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:00.321 10:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:00.321 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:41:00.321 ... 00:41:00.321 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:41:00.321 ... 
00:41:00.321 fio-3.35 00:41:00.321 Starting 4 threads 00:41:00.321 EAL: No free 2048 kB hugepages reported on node 1 00:41:06.889 00:41:06.889 filename0: (groupid=0, jobs=1): err= 0: pid=2290719: Mon Jul 22 10:57:11 2024 00:41:06.889 read: IOPS=2247, BW=17.6MiB/s (18.4MB/s)(87.8MiB/5002msec) 00:41:06.889 slat (nsec): min=5390, max=72443, avg=6296.55, stdev=1630.88 00:41:06.889 clat (usec): min=1718, max=5677, avg=3544.74, stdev=477.47 00:41:06.889 lat (usec): min=1724, max=5684, avg=3551.03, stdev=477.43 00:41:06.889 clat percentiles (usec): 00:41:06.889 | 1.00th=[ 2376], 5.00th=[ 2802], 10.00th=[ 2933], 20.00th=[ 3163], 00:41:06.889 | 30.00th=[ 3326], 40.00th=[ 3523], 50.00th=[ 3556], 60.00th=[ 3589], 00:41:06.889 | 70.00th=[ 3785], 80.00th=[ 3818], 90.00th=[ 4080], 95.00th=[ 4424], 00:41:06.889 | 99.00th=[ 4817], 99.50th=[ 4883], 99.90th=[ 5211], 99.95th=[ 5538], 00:41:06.889 | 99.99th=[ 5604] 00:41:06.889 bw ( KiB/s): min=17520, max=18560, per=26.65%, avg=17974.40, stdev=309.99, samples=10 00:41:06.889 iops : min= 2190, max= 2320, avg=2246.80, stdev=38.75, samples=10 00:41:06.889 lat (msec) : 2=0.19%, 4=85.98%, 10=13.83% 00:41:06.889 cpu : usr=97.96%, sys=1.80%, ctx=5, majf=0, minf=2 00:41:06.889 IO depths : 1=0.2%, 2=1.4%, 4=65.1%, 8=33.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:06.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.889 complete : 0=0.0%, 4=97.2%, 8=2.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.889 issued rwts: total=11240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:06.889 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:06.889 filename0: (groupid=0, jobs=1): err= 0: pid=2290720: Mon Jul 22 10:57:11 2024 00:41:06.889 read: IOPS=2001, BW=15.6MiB/s (16.4MB/s)(78.2MiB/5002msec) 00:41:06.889 slat (nsec): min=5397, max=61069, avg=8145.12, stdev=1810.94 00:41:06.889 clat (usec): min=1446, max=7089, avg=3975.50, stdev=715.10 00:41:06.889 lat (usec): min=1452, max=7118, avg=3983.64, stdev=714.96 00:41:06.889 clat percentiles (usec): 00:41:06.889 | 1.00th=[ 3032], 5.00th=[ 3261], 10.00th=[ 3425], 20.00th=[ 3490], 00:41:06.889 | 30.00th=[ 3589], 40.00th=[ 3687], 50.00th=[ 3752], 60.00th=[ 3785], 00:41:06.889 | 70.00th=[ 4015], 80.00th=[ 4146], 90.00th=[ 5407], 95.00th=[ 5604], 00:41:06.889 | 99.00th=[ 5997], 99.50th=[ 6063], 99.90th=[ 6325], 99.95th=[ 6783], 00:41:06.889 | 99.99th=[ 7046] 00:41:06.889 bw ( KiB/s): min=15712, max=16240, per=23.73%, avg=16003.20, stdev=205.15, samples=10 00:41:06.889 iops : min= 1964, max= 2030, avg=2000.40, stdev=25.64, samples=10 00:41:06.889 lat (msec) : 2=0.12%, 4=70.08%, 10=29.80% 00:41:06.889 cpu : usr=97.38%, sys=2.36%, ctx=7, majf=0, minf=9 00:41:06.889 IO depths : 1=0.1%, 2=0.8%, 4=71.0%, 8=28.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:06.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.889 complete : 0=0.0%, 4=93.5%, 8=6.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.889 issued rwts: total=10010,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:06.889 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:06.889 filename1: (groupid=0, jobs=1): err= 0: pid=2290721: Mon Jul 22 10:57:11 2024 00:41:06.889 read: IOPS=1991, BW=15.6MiB/s (16.3MB/s)(77.8MiB/5001msec) 00:41:06.889 slat (nsec): min=5381, max=72288, avg=5953.87, stdev=1710.40 00:41:06.889 clat (usec): min=1880, max=6874, avg=4000.12, stdev=742.11 00:41:06.889 lat (usec): min=1897, max=6879, avg=4006.08, stdev=741.99 00:41:06.889 clat percentiles (usec): 00:41:06.889 | 1.00th=[ 3064], 5.00th=[ 3294], 
10.00th=[ 3425], 20.00th=[ 3490], 00:41:06.889 | 30.00th=[ 3556], 40.00th=[ 3654], 50.00th=[ 3752], 60.00th=[ 3818], 00:41:06.889 | 70.00th=[ 3982], 80.00th=[ 4228], 90.00th=[ 5407], 95.00th=[ 5735], 00:41:06.889 | 99.00th=[ 5997], 99.50th=[ 6128], 99.90th=[ 6325], 99.95th=[ 6390], 00:41:06.889 | 99.99th=[ 6849] 00:41:06.889 bw ( KiB/s): min=15328, max=16720, per=23.62%, avg=15928.00, stdev=361.43, samples=10 00:41:06.889 iops : min= 1916, max= 2090, avg=1991.00, stdev=45.18, samples=10 00:41:06.889 lat (msec) : 2=0.03%, 4=71.04%, 10=28.93% 00:41:06.889 cpu : usr=97.50%, sys=2.26%, ctx=7, majf=0, minf=0 00:41:06.889 IO depths : 1=0.1%, 2=0.2%, 4=72.0%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:06.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.889 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.889 issued rwts: total=9961,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:06.889 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:06.889 filename1: (groupid=0, jobs=1): err= 0: pid=2290722: Mon Jul 22 10:57:11 2024 00:41:06.889 read: IOPS=2190, BW=17.1MiB/s (17.9MB/s)(85.6MiB/5002msec) 00:41:06.889 slat (nsec): min=5402, max=50276, avg=6016.63, stdev=1558.63 00:41:06.889 clat (usec): min=1280, max=7248, avg=3635.70, stdev=670.76 00:41:06.889 lat (usec): min=1286, max=7254, avg=3641.72, stdev=670.81 00:41:06.889 clat percentiles (usec): 00:41:06.890 | 1.00th=[ 2540], 5.00th=[ 2802], 10.00th=[ 2933], 20.00th=[ 3163], 00:41:06.890 | 30.00th=[ 3294], 40.00th=[ 3425], 50.00th=[ 3523], 60.00th=[ 3654], 00:41:06.890 | 70.00th=[ 3752], 80.00th=[ 3949], 90.00th=[ 4621], 95.00th=[ 5211], 00:41:06.890 | 99.00th=[ 5735], 99.50th=[ 5866], 99.90th=[ 6325], 99.95th=[ 6390], 00:41:06.890 | 99.99th=[ 7242] 00:41:06.890 bw ( KiB/s): min=16753, max=18160, per=25.98%, avg=17521.70, stdev=480.14, samples=10 00:41:06.890 iops : min= 2094, max= 2270, avg=2190.20, stdev=60.04, samples=10 00:41:06.890 lat (msec) : 2=0.20%, 4=80.30%, 10=19.49% 00:41:06.890 cpu : usr=97.58%, sys=2.00%, ctx=149, majf=0, minf=9 00:41:06.890 IO depths : 1=0.1%, 2=0.5%, 4=70.5%, 8=29.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:06.890 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.890 complete : 0=0.0%, 4=94.0%, 8=6.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.890 issued rwts: total=10957,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:06.890 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:06.890 00:41:06.890 Run status group 0 (all jobs): 00:41:06.890 READ: bw=65.9MiB/s (69.1MB/s), 15.6MiB/s-17.6MiB/s (16.3MB/s-18.4MB/s), io=329MiB (345MB), run=5001-5002msec 00:41:06.890 10:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:41:06.890 10:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:06.890 10:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:06.890 10:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:06.890 10:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:06.890 10:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:06.890 10:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:06.890 10:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:06.890 10:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:41:06.890 10:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:06.890 10:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:06.890 10:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:06.890 10:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:06.890 10:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:06.890 10:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:06.890 10:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:41:06.890 10:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:06.890 10:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:06.890 10:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:06.890 10:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:06.890 10:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:06.890 10:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:06.890 10:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:06.890 10:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:06.890 00:41:06.890 real 0m24.577s 00:41:06.890 user 5m17.132s 00:41:06.890 sys 0m3.673s 00:41:06.890 10:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:06.890 10:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:06.890 ************************************ 00:41:06.890 END TEST fio_dif_rand_params 00:41:06.890 ************************************ 00:41:06.890 10:57:11 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:41:06.890 10:57:11 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:41:06.890 10:57:11 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:06.890 10:57:11 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:06.890 10:57:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:06.890 ************************************ 00:41:06.890 START TEST fio_dif_digest 00:41:06.890 ************************************ 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:41:06.890 10:57:11 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:06.890 bdev_null0 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:06.890 [2024-07-22 10:57:11.600540] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:41:06.890 { 00:41:06.890 "params": { 00:41:06.890 "name": "Nvme$subsystem", 00:41:06.890 "trtype": "$TEST_TRANSPORT", 00:41:06.890 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:06.890 "adrfam": "ipv4", 
00:41:06.890 "trsvcid": "$NVMF_PORT", 00:41:06.890 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:06.890 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:06.890 "hdgst": ${hdgst:-false}, 00:41:06.890 "ddgst": ${ddgst:-false} 00:41:06.890 }, 00:41:06.890 "method": "bdev_nvme_attach_controller" 00:41:06.890 } 00:41:06.890 EOF 00:41:06.890 )") 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:41:06.890 "params": { 00:41:06.890 "name": "Nvme0", 00:41:06.890 "trtype": "tcp", 00:41:06.890 "traddr": "10.0.0.2", 00:41:06.890 "adrfam": "ipv4", 00:41:06.890 "trsvcid": "4420", 00:41:06.890 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:06.890 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:06.890 "hdgst": true, 00:41:06.890 "ddgst": true 00:41:06.890 }, 00:41:06.890 "method": "bdev_nvme_attach_controller" 00:41:06.890 }' 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:06.890 10:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:06.890 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:41:06.890 ... 
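The digest pass differs from the run above only in the null bdev's DIF type (3 instead of 1) and in enabling TCP header and data digests, which is why the attach parameters printed above carry "hdgst": true and "ddgst": true. A standalone JSON config for the fio spdk_bdev plugin would wrap that same entry in SPDK's usual subsystems layout; this is a sketch under that assumption, where the bdev.json name and the outer "subsystems"/"bdev" wrapper are not shown in the trace and the params block is copied from it.

# write the attach config consumed via --spdk_json_conf
cat > bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": true,
            "ddgst": true
          }
        }
      ]
    }
  ]
}
EOF

A fio job pointed at this config would then address the attached namespace by its bdev name (Nvme0n1 under SPDK's default controller naming, an assumption here) with the job parameters set at the top of this test: bs=128k,128k,128k, numjobs=3, iodepth=3, runtime=10.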
00:41:06.890 fio-3.35 00:41:06.890 Starting 3 threads 00:41:06.890 EAL: No free 2048 kB hugepages reported on node 1 00:41:16.871 00:41:16.871 filename0: (groupid=0, jobs=1): err= 0: pid=2292365: Mon Jul 22 10:57:22 2024 00:41:16.871 read: IOPS=221, BW=27.7MiB/s (29.0MB/s)(278MiB/10047msec) 00:41:16.871 slat (nsec): min=5767, max=32581, avg=6551.97, stdev=1244.28 00:41:16.871 clat (usec): min=6815, max=54046, avg=13524.30, stdev=1672.14 00:41:16.871 lat (usec): min=6823, max=54052, avg=13530.85, stdev=1671.88 00:41:16.871 clat percentiles (usec): 00:41:16.871 | 1.00th=[ 9110], 5.00th=[11731], 10.00th=[12125], 20.00th=[12649], 00:41:16.871 | 30.00th=[13042], 40.00th=[13304], 50.00th=[13566], 60.00th=[13829], 00:41:16.871 | 70.00th=[14091], 80.00th=[14353], 90.00th=[14877], 95.00th=[15270], 00:41:16.871 | 99.00th=[16188], 99.50th=[16319], 99.90th=[17171], 99.95th=[51119], 00:41:16.871 | 99.99th=[54264] 00:41:16.871 bw ( KiB/s): min=27648, max=30208, per=34.03%, avg=28441.60, stdev=637.43, samples=20 00:41:16.871 iops : min= 216, max= 236, avg=222.20, stdev= 4.98, samples=20 00:41:16.871 lat (msec) : 10=1.48%, 20=98.43%, 100=0.09% 00:41:16.871 cpu : usr=95.71%, sys=4.07%, ctx=23, majf=0, minf=125 00:41:16.871 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:16.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:16.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:16.871 issued rwts: total=2224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:16.871 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:16.871 filename0: (groupid=0, jobs=1): err= 0: pid=2292366: Mon Jul 22 10:57:22 2024 00:41:16.871 read: IOPS=222, BW=27.8MiB/s (29.1MB/s)(279MiB/10044msec) 00:41:16.871 slat (nsec): min=5662, max=32471, avg=6499.84, stdev=901.52 00:41:16.871 clat (usec): min=9981, max=57024, avg=13479.24, stdev=3086.05 00:41:16.871 lat (usec): min=9987, max=57057, avg=13485.74, stdev=3086.28 00:41:16.871 clat percentiles (usec): 00:41:16.871 | 1.00th=[10814], 5.00th=[11469], 10.00th=[11863], 20.00th=[12387], 00:41:16.871 | 30.00th=[12649], 40.00th=[13042], 50.00th=[13304], 60.00th=[13566], 00:41:16.871 | 70.00th=[13829], 80.00th=[14091], 90.00th=[14615], 95.00th=[15139], 00:41:16.871 | 99.00th=[16188], 99.50th=[17171], 99.90th=[55837], 99.95th=[56886], 00:41:16.871 | 99.99th=[56886] 00:41:16.871 bw ( KiB/s): min=25856, max=30208, per=34.13%, avg=28531.20, stdev=1244.41, samples=20 00:41:16.871 iops : min= 202, max= 236, avg=222.90, stdev= 9.72, samples=20 00:41:16.871 lat (msec) : 10=0.04%, 20=99.46%, 50=0.04%, 100=0.45% 00:41:16.871 cpu : usr=95.68%, sys=4.08%, ctx=38, majf=0, minf=166 00:41:16.871 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:16.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:16.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:16.871 issued rwts: total=2231,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:16.871 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:16.871 filename0: (groupid=0, jobs=1): err= 0: pid=2292367: Mon Jul 22 10:57:22 2024 00:41:16.871 read: IOPS=209, BW=26.2MiB/s (27.5MB/s)(263MiB/10046msec) 00:41:16.871 slat (nsec): min=5779, max=32878, avg=6501.09, stdev=965.66 00:41:16.871 clat (usec): min=8711, max=53538, avg=14282.72, stdev=2131.14 00:41:16.871 lat (usec): min=8718, max=53571, avg=14289.22, stdev=2131.47 00:41:16.871 clat percentiles (usec): 00:41:16.871 | 
1.00th=[10814], 5.00th=[12387], 10.00th=[12911], 20.00th=[13304], 00:41:16.871 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14091], 60.00th=[14484], 00:41:16.871 | 70.00th=[14746], 80.00th=[15139], 90.00th=[15664], 95.00th=[16057], 00:41:16.871 | 99.00th=[17171], 99.50th=[17957], 99.90th=[51643], 99.95th=[51643], 00:41:16.871 | 99.99th=[53740] 00:41:16.871 bw ( KiB/s): min=24576, max=28416, per=32.22%, avg=26931.20, stdev=812.10, samples=20 00:41:16.871 iops : min= 192, max= 222, avg=210.40, stdev= 6.34, samples=20 00:41:16.871 lat (msec) : 10=0.43%, 20=99.34%, 50=0.09%, 100=0.14% 00:41:16.871 cpu : usr=95.54%, sys=4.24%, ctx=26, majf=0, minf=102 00:41:16.871 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:16.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:16.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:16.871 issued rwts: total=2106,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:16.871 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:16.871 00:41:16.871 Run status group 0 (all jobs): 00:41:16.871 READ: bw=81.6MiB/s (85.6MB/s), 26.2MiB/s-27.8MiB/s (27.5MB/s-29.1MB/s), io=820MiB (860MB), run=10044-10047msec 00:41:17.131 10:57:22 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:41:17.131 10:57:22 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:41:17.131 10:57:22 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:41:17.131 10:57:22 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:17.131 10:57:22 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:41:17.131 10:57:22 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:17.131 10:57:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:17.131 10:57:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:17.131 10:57:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:17.131 10:57:22 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:17.131 10:57:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:17.131 10:57:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:17.131 10:57:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:17.131 00:41:17.131 real 0m11.127s 00:41:17.131 user 0m40.385s 00:41:17.131 sys 0m1.540s 00:41:17.131 10:57:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:17.131 10:57:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:17.131 ************************************ 00:41:17.131 END TEST fio_dif_digest 00:41:17.131 ************************************ 00:41:17.131 10:57:22 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:41:17.131 10:57:22 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:41:17.131 10:57:22 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:41:17.131 10:57:22 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:41:17.131 10:57:22 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:41:17.131 10:57:22 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:41:17.131 10:57:22 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:41:17.131 10:57:22 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:41:17.131 10:57:22 nvmf_dif -- nvmf/common.sh@122 -- # modprobe 
-v -r nvme-tcp 00:41:17.131 rmmod nvme_tcp 00:41:17.131 rmmod nvme_fabrics 00:41:17.131 rmmod nvme_keyring 00:41:17.131 10:57:22 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:41:17.131 10:57:22 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:41:17.131 10:57:22 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:41:17.131 10:57:22 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 2281659 ']' 00:41:17.131 10:57:22 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 2281659 00:41:17.131 10:57:22 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 2281659 ']' 00:41:17.131 10:57:22 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 2281659 00:41:17.131 10:57:22 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:41:17.131 10:57:22 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:41:17.131 10:57:22 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2281659 00:41:17.391 10:57:22 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:41:17.391 10:57:22 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:41:17.391 10:57:22 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2281659' 00:41:17.391 killing process with pid 2281659 00:41:17.391 10:57:22 nvmf_dif -- common/autotest_common.sh@967 -- # kill 2281659 00:41:17.391 10:57:22 nvmf_dif -- common/autotest_common.sh@972 -- # wait 2281659 00:41:17.391 10:57:22 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:41:17.391 10:57:22 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:41:21.585 Waiting for block devices as requested 00:41:21.585 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:41:21.585 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:41:21.585 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:41:21.585 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:41:21.585 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:41:21.585 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:41:21.585 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:41:21.585 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:41:21.844 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:41:21.844 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:41:21.844 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:41:22.104 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:41:22.104 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:41:22.104 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:41:22.363 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:41:22.363 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:41:22.363 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:41:22.363 10:57:27 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:41:22.363 10:57:27 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:41:22.363 10:57:27 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:41:22.363 10:57:27 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:41:22.363 10:57:27 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:22.363 10:57:27 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:22.363 10:57:27 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:24.901 10:57:30 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:41:24.901 00:41:24.901 real 1m18.294s 00:41:24.901 user 8m1.579s 00:41:24.901 sys 0m20.275s 00:41:24.901 10:57:30 nvmf_dif -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:41:24.901 10:57:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:24.901 ************************************ 00:41:24.901 END TEST nvmf_dif 00:41:24.901 ************************************ 00:41:24.901 10:57:30 -- common/autotest_common.sh@1142 -- # return 0 00:41:24.901 10:57:30 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:41:24.901 10:57:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:24.901 10:57:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:24.901 10:57:30 -- common/autotest_common.sh@10 -- # set +x 00:41:24.901 ************************************ 00:41:24.901 START TEST nvmf_abort_qd_sizes 00:41:24.901 ************************************ 00:41:24.901 10:57:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:41:24.901 * Looking for test storage... 00:41:24.901 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:24.901 10:57:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:24.901 10:57:30 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:41:24.901 10:57:30 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:24.901 10:57:30 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:24.901 10:57:30 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:24.901 10:57:30 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:24.901 10:57:30 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:24.901 10:57:30 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:24.901 10:57:30 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:24.901 10:57:30 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:24.901 10:57:30 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:24.901 10:57:30 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:24.901 10:57:30 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:41:24.901 10:57:30 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:41:24.901 10:57:30 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:24.901 10:57:30 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:24.901 10:57:30 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:24.901 10:57:30 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:24.901 10:57:30 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:24.901 10:57:30 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:24.901 10:57:30 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:24.901 10:57:30 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:24.901 10:57:30 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:24.901 10:57:30 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:24.901 10:57:30 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:24.901 10:57:30 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:41:24.901 10:57:30 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:24.901 10:57:30 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:41:24.901 10:57:30 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:41:24.901 10:57:30 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:41:24.901 10:57:30 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:24.901 10:57:30 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:24.901 10:57:30 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:24.901 10:57:30 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:41:24.901 10:57:30 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:41:24.901 10:57:30 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:41:24.901 10:57:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:41:24.901 10:57:30 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:41:24.901 10:57:30 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:24.901 10:57:30 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:41:24.901 10:57:30 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:41:24.901 10:57:30 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:41:24.902 10:57:30 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:24.902 10:57:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:24.902 10:57:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:24.902 10:57:30 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:41:24.902 10:57:30 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:41:24.902 10:57:30 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:41:24.902 10:57:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:33.077 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:33.077 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:41:33.077 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:41:33.077 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:41:33.077 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:41:33.077 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:41:33.077 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:41:33.077 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:41:33.077 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:41:33.077 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:41:33.077 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:41:33.077 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:41:33.077 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:41:33.077 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:41:33.077 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:41:33.077 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:33.077 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:33.077 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:33.077 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:33.077 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:33.077 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:33.077 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:41:33.078 Found 0000:31:00.0 (0x8086 - 0x159b) 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:41:33.078 Found 0000:31:00.1 (0x8086 - 0x159b) 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:41:33.078 Found net devices under 0000:31:00.0: cvl_0_0 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:41:33.078 Found net devices under 0000:31:00.1: cvl_0_1 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
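The device discovery traced above reduces to a sysfs walk: for each supported NIC PCI address, the kernel exposes the bound net interface under /sys/bus/pci/devices/<bdf>/net/, which is how 0000:31:00.0 resolves to cvl_0_0 and 0000:31:00.1 to cvl_0_1 here. The loop below is an illustration of that mapping, not the script's exact code; the two PCI addresses are the e810 ports reported in this log.
# Map each NIC PCI address to its kernel net device, mirroring the
# "Found net devices under 0000:31:00.x: cvl_0_x" lines above.
for pci in 0000:31:00.0 0000:31:00.1; do
  for path in /sys/bus/pci/devices/"$pci"/net/*; do
    [ -e "$path" ] || continue   # skip PCI functions with no bound net device
    echo "Found net devices under $pci: $(basename "$path")"
  done
done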
00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:41:33.078 10:57:37 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:33.078 10:57:38 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:33.078 10:57:38 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:33.078 10:57:38 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:41:33.078 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:33.078 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.490 ms 00:41:33.078 00:41:33.078 --- 10.0.0.2 ping statistics --- 00:41:33.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:33.078 rtt min/avg/max/mdev = 0.490/0.490/0.490/0.000 ms 00:41:33.078 10:57:38 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:33.078 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:33.078 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:41:33.078 00:41:33.078 --- 10.0.0.1 ping statistics --- 00:41:33.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:33.078 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:41:33.078 10:57:38 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:33.078 10:57:38 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:41:33.078 10:57:38 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:41:33.078 10:57:38 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:41:36.377 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:41:36.377 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:41:36.377 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:41:36.377 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:41:36.377 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:41:36.377 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:41:36.377 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:41:36.377 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:41:36.377 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:41:36.377 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:41:36.377 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:41:36.377 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:41:36.377 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:41:36.377 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:41:36.377 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:41:36.377 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:41:36.377 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:41:36.377 10:57:42 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:36.377 10:57:42 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:41:36.377 10:57:42 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:41:36.377 10:57:42 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:36.377 10:57:42 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:41:36.377 10:57:42 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:41:36.377 10:57:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:41:36.377 10:57:42 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:41:36.377 10:57:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:41:36.377 10:57:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:36.377 10:57:42 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:41:36.377 10:57:42 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=2302619 00:41:36.377 10:57:42 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 2302619 00:41:36.377 10:57:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 2302619 ']' 00:41:36.377 10:57:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:36.377 10:57:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:41:36.377 10:57:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:41:36.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:36.377 10:57:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:41:36.377 10:57:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:36.637 [2024-07-22 10:57:42.117877] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:41:36.637 [2024-07-22 10:57:42.117926] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:36.637 EAL: No free 2048 kB hugepages reported on node 1 00:41:36.637 [2024-07-22 10:57:42.188517] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:36.637 [2024-07-22 10:57:42.221784] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:36.637 [2024-07-22 10:57:42.221818] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:36.637 [2024-07-22 10:57:42.221826] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:36.637 [2024-07-22 10:57:42.221832] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:36.637 [2024-07-22 10:57:42.221838] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:36.637 [2024-07-22 10:57:42.221971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:41:36.637 [2024-07-22 10:57:42.222100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:41:36.637 [2024-07-22 10:57:42.222259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:36.637 [2024-07-22 10:57:42.222260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:41:37.206 10:57:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:41:37.206 10:57:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:41:37.206 10:57:42 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:41:37.206 10:57:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:41:37.206 10:57:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:37.486 10:57:42 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:37.486 10:57:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:41:37.487 10:57:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:41:37.487 10:57:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:41:37.487 10:57:42 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:41:37.487 10:57:42 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:41:37.487 10:57:42 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:65:00.0 ]] 00:41:37.487 10:57:42 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:41:37.487 10:57:42 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:41:37.487 10:57:42 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:41:37.487 10:57:42 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:41:37.487 10:57:42 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:41:37.487 10:57:42 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:41:37.487 10:57:42 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:41:37.487 10:57:42 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:65:00.0 00:41:37.487 10:57:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:41:37.487 10:57:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:41:37.487 10:57:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:41:37.487 10:57:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:37.487 10:57:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:37.487 10:57:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:37.487 ************************************ 00:41:37.487 START TEST spdk_target_abort 00:41:37.487 ************************************ 00:41:37.487 10:57:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:41:37.487 10:57:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:41:37.487 10:57:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:41:37.487 10:57:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:37.487 10:57:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:37.749 spdk_targetn1 00:41:37.750 10:57:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:37.750 10:57:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:37.750 10:57:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:37.750 10:57:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:37.750 [2024-07-22 10:57:43.293420] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:37.750 10:57:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:37.750 10:57:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:41:37.750 10:57:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:37.750 10:57:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:37.750 10:57:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:37.750 10:57:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:41:37.750 10:57:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:37.750 10:57:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:37.750 10:57:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:37.750 10:57:43 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:41:37.750 10:57:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:37.750 10:57:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:37.750 [2024-07-22 10:57:43.333680] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:37.750 10:57:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:37.750 10:57:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:41:37.750 10:57:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:41:37.750 10:57:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:41:37.750 10:57:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:41:37.750 10:57:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:41:37.750 10:57:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:41:37.750 10:57:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:41:37.750 10:57:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:41:37.750 10:57:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:41:37.750 10:57:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:37.750 10:57:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:41:37.750 10:57:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:37.750 10:57:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:41:37.750 10:57:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:37.750 10:57:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:41:37.750 10:57:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:37.750 10:57:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:41:37.750 10:57:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:37.750 10:57:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:37.750 10:57:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:37.750 10:57:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:37.750 EAL: No free 2048 kB hugepages 
reported on node 1 00:41:38.040 [2024-07-22 10:57:43.624555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:608 len:8 PRP1 0x2000078be000 PRP2 0x0 00:41:38.040 [2024-07-22 10:57:43.624578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:004f p:1 m:0 dnr:0 00:41:38.040 [2024-07-22 10:57:43.634155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:976 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:41:38.040 [2024-07-22 10:57:43.634171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:007d p:1 m:0 dnr:0 00:41:38.040 [2024-07-22 10:57:43.671806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:2264 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:41:38.040 [2024-07-22 10:57:43.671823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:41:38.364 [2024-07-22 10:57:43.719824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:3952 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:41:38.364 [2024-07-22 10:57:43.719843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00f1 p:0 m:0 dnr:0 00:41:41.650 Initializing NVMe Controllers 00:41:41.650 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:41:41.650 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:41.650 Initialization complete. Launching workers. 00:41:41.650 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12534, failed: 4 00:41:41.650 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2753, failed to submit 9785 00:41:41.650 success 759, unsuccess 1994, failed 0 00:41:41.650 10:57:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:41.650 10:57:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:41.650 EAL: No free 2048 kB hugepages reported on node 1 00:41:41.650 [2024-07-22 10:57:46.818607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:188 nsid:1 lba:608 len:8 PRP1 0x200007c5c000 PRP2 0x0 00:41:41.650 [2024-07-22 10:57:46.818647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:188 cdw0:0 sqhd:0057 p:1 m:0 dnr:0 00:41:41.650 [2024-07-22 10:57:46.858529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:186 nsid:1 lba:1576 len:8 PRP1 0x200007c40000 PRP2 0x0 00:41:41.650 [2024-07-22 10:57:46.858552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:186 cdw0:0 sqhd:00d2 p:1 m:0 dnr:0 00:41:44.940 Initializing NVMe Controllers 00:41:44.940 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:41:44.940 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:44.940 Initialization complete. Launching workers. 
00:41:44.940 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8520, failed: 2 00:41:44.940 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1198, failed to submit 7324 00:41:44.940 success 336, unsuccess 862, failed 0 00:41:44.940 10:57:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:44.940 10:57:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:44.940 EAL: No free 2048 kB hugepages reported on node 1 00:41:44.940 [2024-07-22 10:57:50.121053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:158 nsid:1 lba:3528 len:8 PRP1 0x200007914000 PRP2 0x0 00:41:44.940 [2024-07-22 10:57:50.121084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:158 cdw0:0 sqhd:00c1 p:1 m:0 dnr:0 00:41:45.200 [2024-07-22 10:57:50.741824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:73272 len:8 PRP1 0x200007922000 PRP2 0x0 00:41:45.200 [2024-07-22 10:57:50.741849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00c2 p:1 m:0 dnr:0 00:41:47.108 [2024-07-22 10:57:52.382898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:257168 len:8 PRP1 0x20000791e000 PRP2 0x0 00:41:47.108 [2024-07-22 10:57:52.382923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0098 p:1 m:0 dnr:0 00:41:47.678 Initializing NVMe Controllers 00:41:47.678 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:41:47.678 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:47.678 Initialization complete. Launching workers. 
00:41:47.678 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 42050, failed: 3 00:41:47.678 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2730, failed to submit 39323 00:41:47.678 success 596, unsuccess 2134, failed 0 00:41:47.678 10:57:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:41:47.678 10:57:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:47.678 10:57:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:47.678 10:57:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:47.678 10:57:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:41:47.678 10:57:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:47.678 10:57:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:49.581 10:57:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:49.581 10:57:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2302619 00:41:49.581 10:57:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 2302619 ']' 00:41:49.581 10:57:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 2302619 00:41:49.581 10:57:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:41:49.581 10:57:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:41:49.581 10:57:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2302619 00:41:49.581 10:57:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:41:49.581 10:57:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:41:49.581 10:57:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2302619' 00:41:49.581 killing process with pid 2302619 00:41:49.581 10:57:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 2302619 00:41:49.581 10:57:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 2302619 00:41:49.581 00:41:49.581 real 0m12.163s 00:41:49.581 user 0m49.690s 00:41:49.581 sys 0m1.741s 00:41:49.581 10:57:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:49.581 10:57:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:49.581 ************************************ 00:41:49.581 END TEST spdk_target_abort 00:41:49.581 ************************************ 00:41:49.581 10:57:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:41:49.581 10:57:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:41:49.581 10:57:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:49.581 10:57:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:49.581 10:57:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:49.581 
************************************ 00:41:49.581 START TEST kernel_target_abort 00:41:49.581 ************************************ 00:41:49.581 10:57:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:41:49.581 10:57:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:41:49.581 10:57:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:41:49.581 10:57:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:49.581 10:57:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:49.581 10:57:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:49.581 10:57:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:49.581 10:57:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:49.581 10:57:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:49.581 10:57:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:49.581 10:57:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:49.581 10:57:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:49.581 10:57:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:41:49.581 10:57:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:41:49.581 10:57:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:41:49.581 10:57:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:49.581 10:57:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:41:49.581 10:57:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:41:49.581 10:57:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:41:49.581 10:57:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:41:49.581 10:57:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:41:49.581 10:57:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:41:49.581 10:57:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:41:53.782 Waiting for block devices as requested 00:41:53.782 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:41:53.782 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:41:53.782 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:41:53.782 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:41:53.782 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:41:53.782 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:41:54.042 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:41:54.042 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:41:54.042 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:41:54.304 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:41:54.304 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:41:54.304 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:41:54.304 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:41:54.564 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:41:54.564 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:41:54.564 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:41:54.826 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:41:54.826 10:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:41:54.826 10:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:41:54.826 10:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:41:54.826 10:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:41:54.826 10:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:41:54.826 10:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:41:54.826 10:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:41:54.826 10:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:41:54.826 10:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:41:54.826 No valid GPT data, bailing 00:41:54.826 10:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:41:54.826 10:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:41:54.826 10:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:41:54.826 10:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:41:54.826 10:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:41:54.826 10:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:54.826 10:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:41:54.826 10:58:00 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:41:54.826 10:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:41:54.826 10:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:41:54.826 10:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:41:54.826 10:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:41:54.826 10:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:41:54.826 10:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:41:54.826 10:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:41:54.826 10:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:41:54.826 10:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:41:54.826 10:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:41:54.826 00:41:54.826 Discovery Log Number of Records 2, Generation counter 2 00:41:54.826 =====Discovery Log Entry 0====== 00:41:54.826 trtype: tcp 00:41:54.826 adrfam: ipv4 00:41:54.826 subtype: current discovery subsystem 00:41:54.826 treq: not specified, sq flow control disable supported 00:41:54.826 portid: 1 00:41:54.826 trsvcid: 4420 00:41:54.826 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:41:54.826 traddr: 10.0.0.1 00:41:54.826 eflags: none 00:41:54.826 sectype: none 00:41:54.826 =====Discovery Log Entry 1====== 00:41:54.826 trtype: tcp 00:41:54.826 adrfam: ipv4 00:41:54.826 subtype: nvme subsystem 00:41:54.826 treq: not specified, sq flow control disable supported 00:41:54.826 portid: 1 00:41:54.826 trsvcid: 4420 00:41:54.826 subnqn: nqn.2016-06.io.spdk:testnqn 00:41:54.826 traddr: 10.0.0.1 00:41:54.826 eflags: none 00:41:54.826 sectype: none 00:41:54.826 10:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:41:54.826 10:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:41:54.826 10:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:41:54.826 10:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:41:54.826 10:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:41:54.826 10:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:41:54.826 10:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:41:54.826 10:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:41:54.826 10:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:41:54.826 10:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:54.826 10:58:00 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:41:54.826 10:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:54.826 10:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:41:54.826 10:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:54.826 10:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:41:54.826 10:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:54.826 10:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:41:54.826 10:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:54.826 10:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:54.826 10:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:54.826 10:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:54.826 EAL: No free 2048 kB hugepages reported on node 1 00:41:58.118 Initializing NVMe Controllers 00:41:58.118 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:41:58.118 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:58.118 Initialization complete. Launching workers. 00:41:58.118 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 64144, failed: 0 00:41:58.118 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 64144, failed to submit 0 00:41:58.118 success 0, unsuccess 64144, failed 0 00:41:58.118 10:58:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:58.118 10:58:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:58.118 EAL: No free 2048 kB hugepages reported on node 1 00:42:01.404 Initializing NVMe Controllers 00:42:01.404 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:01.404 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:01.404 Initialization complete. Launching workers. 
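For reference, the kernel-target setup and the queue-depth sweep traced above condense to roughly the sketch below (paths are relative to the SPDK tree; the NQN, namespace device and 10.0.0.1:4420 listener are the values used in this run; the configfs attribute names behind the bare `echo`s are assumptions, since xtrace does not record redirection targets; the qd=24 and qd=64 runs continue in the trace below):

  nqn=nqn.2016-06.io.spdk:testnqn
  cfs=/sys/kernel/config/nvmet
  modprobe nvmet
  mkdir "$cfs/subsystems/$nqn" "$cfs/subsystems/$nqn/namespaces/1" "$cfs/ports/1"
  echo 1            > "$cfs/subsystems/$nqn/attr_allow_any_host"        # assumed target of the bare 'echo 1'
  echo /dev/nvme0n1 > "$cfs/subsystems/$nqn/namespaces/1/device_path"   # the non-zoned, unpartitioned NVMe disk found above
  echo 1            > "$cfs/subsystems/$nqn/namespaces/1/enable"
  echo 10.0.0.1     > "$cfs/ports/1/addr_traddr"
  echo tcp          > "$cfs/ports/1/addr_trtype"
  echo 4420         > "$cfs/ports/1/addr_trsvcid"
  echo ipv4         > "$cfs/ports/1/addr_adrfam"
  ln -s "$cfs/subsystems/$nqn" "$cfs/ports/1/subsystems/"
  # rabort: run the SPDK abort example against the kernel target at each queue depth
  for qd in 4 24 64; do
    build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
      -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:$nqn"
  done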
00:42:01.404 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 104159, failed: 0 00:42:01.404 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 26250, failed to submit 77909 00:42:01.404 success 0, unsuccess 26250, failed 0 00:42:01.404 10:58:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:01.404 10:58:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:01.404 EAL: No free 2048 kB hugepages reported on node 1 00:42:03.939 Initializing NVMe Controllers 00:42:03.939 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:03.939 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:03.939 Initialization complete. Launching workers. 00:42:03.939 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 100838, failed: 0 00:42:03.939 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25206, failed to submit 75632 00:42:03.939 success 0, unsuccess 25206, failed 0 00:42:03.939 10:58:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:42:03.939 10:58:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:42:03.939 10:58:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:42:03.939 10:58:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:03.939 10:58:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:03.939 10:58:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:42:04.198 10:58:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:04.198 10:58:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:42:04.198 10:58:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:42:04.198 10:58:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:08.395 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:42:08.395 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:42:08.395 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:42:08.395 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:42:08.395 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:42:08.395 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:42:08.395 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:42:08.395 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:42:08.395 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:42:08.395 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:42:08.395 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:42:08.395 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:42:08.395 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:42:08.395 0000:00:01.3 (8086 0b00): ioatdma -> 
vfio-pci 00:42:08.395 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:42:08.395 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:42:09.777 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:42:09.777 00:42:09.777 real 0m20.099s 00:42:09.777 user 0m9.612s 00:42:09.777 sys 0m6.168s 00:42:09.777 10:58:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:09.777 10:58:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:09.777 ************************************ 00:42:09.777 END TEST kernel_target_abort 00:42:09.777 ************************************ 00:42:09.777 10:58:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:42:09.777 10:58:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:42:09.777 10:58:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:42:09.777 10:58:15 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:42:09.777 10:58:15 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:42:09.777 10:58:15 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:42:09.777 10:58:15 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:42:09.777 10:58:15 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:42:09.777 10:58:15 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:42:09.777 rmmod nvme_tcp 00:42:09.777 rmmod nvme_fabrics 00:42:09.777 rmmod nvme_keyring 00:42:09.777 10:58:15 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:42:09.777 10:58:15 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:42:09.777 10:58:15 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:42:09.777 10:58:15 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 2302619 ']' 00:42:09.777 10:58:15 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 2302619 00:42:09.777 10:58:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 2302619 ']' 00:42:09.777 10:58:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 2302619 00:42:09.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2302619) - No such process 00:42:09.777 10:58:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 2302619 is not found' 00:42:09.777 Process with pid 2302619 is not found 00:42:09.777 10:58:15 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:42:09.777 10:58:15 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:13.978 Waiting for block devices as requested 00:42:13.978 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:42:13.978 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:42:13.978 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:42:13.978 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:42:13.978 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:42:13.978 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:42:13.978 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:42:14.237 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:42:14.237 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:42:14.497 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:42:14.497 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:42:14.497 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:42:14.497 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:42:14.758 0000:00:01.2 (8086 0b00): vfio-pci -> 
ioatdma 00:42:14.758 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:42:14.758 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:42:14.758 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:42:14.758 10:58:20 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:42:14.758 10:58:20 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:42:14.758 10:58:20 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:42:14.758 10:58:20 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:42:14.758 10:58:20 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:14.758 10:58:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:14.758 10:58:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:17.302 10:58:22 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:42:17.302 00:42:17.302 real 0m52.412s 00:42:17.302 user 1m4.886s 00:42:17.302 sys 0m19.096s 00:42:17.302 10:58:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:17.302 10:58:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:17.302 ************************************ 00:42:17.302 END TEST nvmf_abort_qd_sizes 00:42:17.302 ************************************ 00:42:17.302 10:58:22 -- common/autotest_common.sh@1142 -- # return 0 00:42:17.302 10:58:22 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:42:17.302 10:58:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:17.302 10:58:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:17.302 10:58:22 -- common/autotest_common.sh@10 -- # set +x 00:42:17.302 ************************************ 00:42:17.302 START TEST keyring_file 00:42:17.302 ************************************ 00:42:17.302 10:58:22 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:42:17.302 * Looking for test storage... 
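The clean_kernel_target teardown traced a little further up (just before the nvmf_abort_qd_sizes cleanup) is the mirror image of that setup; a condensed sketch, with the same caveat that the target of the bare `echo 0` (presumably the namespace enable attribute) is not visible in the trace:

  nqn=nqn.2016-06.io.spdk:testnqn
  cfs=/sys/kernel/config/nvmet
  echo 0 > "$cfs/subsystems/$nqn/namespaces/1/enable"   # assumed target of the bare 'echo 0'
  rm -f  "$cfs/ports/1/subsystems/$nqn"                 # drop the port -> subsystem link
  rmdir  "$cfs/subsystems/$nqn/namespaces/1"
  rmdir  "$cfs/ports/1"
  rmdir  "$cfs/subsystems/$nqn"
  modprobe -r nvmet_tcp nvmet
  scripts/setup.sh                                      # rebind the NVMe/ioatdma devices, as shown in the trace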
00:42:17.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:42:17.302 10:58:22 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:42:17.302 10:58:22 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:17.302 10:58:22 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:42:17.302 10:58:22 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:17.302 10:58:22 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:17.302 10:58:22 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:17.302 10:58:22 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:17.302 10:58:22 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:17.302 10:58:22 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:17.302 10:58:22 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:17.302 10:58:22 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:17.302 10:58:22 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:17.302 10:58:22 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:17.302 10:58:22 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:17.302 10:58:22 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:17.302 10:58:22 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:17.302 10:58:22 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:17.302 10:58:22 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:17.302 10:58:22 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:17.302 10:58:22 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:17.302 10:58:22 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:17.302 10:58:22 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:17.302 10:58:22 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:17.302 10:58:22 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:17.302 10:58:22 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:17.302 10:58:22 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:17.302 10:58:22 keyring_file -- paths/export.sh@5 -- # export PATH 00:42:17.302 10:58:22 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:17.302 10:58:22 keyring_file -- nvmf/common.sh@47 -- # : 0 00:42:17.302 10:58:22 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:42:17.302 10:58:22 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:42:17.302 10:58:22 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:17.302 10:58:22 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:17.302 10:58:22 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:17.302 10:58:22 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:42:17.302 10:58:22 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:42:17.302 10:58:22 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:42:17.302 10:58:22 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:42:17.302 10:58:22 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:42:17.302 10:58:22 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:42:17.302 10:58:22 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:42:17.302 10:58:22 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:42:17.302 10:58:22 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:42:17.302 10:58:22 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:42:17.302 10:58:22 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:42:17.302 10:58:22 keyring_file -- keyring/common.sh@17 -- # name=key0 00:42:17.302 10:58:22 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:17.302 10:58:22 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:17.302 10:58:22 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:17.302 10:58:22 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.nePVob0WsO 00:42:17.302 10:58:22 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:17.302 10:58:22 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:17.302 10:58:22 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:42:17.302 10:58:22 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:42:17.302 10:58:22 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:42:17.303 10:58:22 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:42:17.303 10:58:22 keyring_file -- nvmf/common.sh@705 -- # python - 00:42:17.303 10:58:22 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.nePVob0WsO 00:42:17.303 10:58:22 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.nePVob0WsO 00:42:17.303 10:58:22 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.nePVob0WsO 00:42:17.303 10:58:22 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:42:17.303 10:58:22 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:42:17.303 10:58:22 keyring_file -- keyring/common.sh@17 -- # name=key1 00:42:17.303 10:58:22 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:42:17.303 10:58:22 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:17.303 10:58:22 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:17.303 10:58:22 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Ctnwc80KVX 00:42:17.303 10:58:22 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:42:17.303 10:58:22 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:42:17.303 10:58:22 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:42:17.303 10:58:22 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:42:17.303 10:58:22 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:42:17.303 10:58:22 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:42:17.303 10:58:22 keyring_file -- nvmf/common.sh@705 -- # python - 00:42:17.303 10:58:22 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Ctnwc80KVX 00:42:17.303 10:58:22 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Ctnwc80KVX 00:42:17.303 10:58:22 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.Ctnwc80KVX 00:42:17.303 10:58:22 keyring_file -- keyring/file.sh@30 -- # tgtpid=2313050 00:42:17.303 10:58:22 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2313050 00:42:17.303 10:58:22 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:42:17.303 10:58:22 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2313050 ']' 00:42:17.303 10:58:22 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:17.303 10:58:22 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:42:17.303 10:58:22 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:17.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:17.303 10:58:22 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:42:17.303 10:58:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:17.303 [2024-07-22 10:58:22.923436] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
00:42:17.303 [2024-07-22 10:58:22.923513] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2313050 ] 00:42:17.303 EAL: No free 2048 kB hugepages reported on node 1 00:42:17.303 [2024-07-22 10:58:22.993862] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:17.608 [2024-07-22 10:58:23.034808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:18.198 10:58:23 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:42:18.198 10:58:23 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:42:18.198 10:58:23 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:42:18.198 10:58:23 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:18.198 10:58:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:18.198 [2024-07-22 10:58:23.680380] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:18.198 null0 00:42:18.198 [2024-07-22 10:58:23.712426] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:42:18.198 [2024-07-22 10:58:23.712745] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:42:18.198 [2024-07-22 10:58:23.720435] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:42:18.198 10:58:23 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:18.198 10:58:23 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:18.198 10:58:23 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:42:18.198 10:58:23 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:18.198 10:58:23 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:42:18.198 10:58:23 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:18.198 10:58:23 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:42:18.198 10:58:23 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:18.198 10:58:23 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:18.198 10:58:23 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:18.198 10:58:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:18.198 [2024-07-22 10:58:23.736482] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:42:18.198 request: 00:42:18.198 { 00:42:18.198 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:42:18.198 "secure_channel": false, 00:42:18.198 "listen_address": { 00:42:18.198 "trtype": "tcp", 00:42:18.198 "traddr": "127.0.0.1", 00:42:18.198 "trsvcid": "4420" 00:42:18.198 }, 00:42:18.198 "method": "nvmf_subsystem_add_listener", 00:42:18.198 "req_id": 1 00:42:18.198 } 00:42:18.198 Got JSON-RPC error response 00:42:18.198 response: 00:42:18.198 { 00:42:18.198 "code": -32602, 00:42:18.198 "message": "Invalid parameters" 00:42:18.198 } 00:42:18.198 10:58:23 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:42:18.198 10:58:23 keyring_file -- common/autotest_common.sh@651 -- # es=1 
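Stripped of the jq bookkeeping, the bperf side of the keyring_file test that follows is driven entirely through rpc.py against /var/tmp/bperf.sock; the calls below are the ones traced after this point (paths are relative to the SPDK tree, and the key-file names are the mktemp results of this run):

  sock=/var/tmp/bperf.sock
  # register the two 0600-mode PSK interchange files created above as named keys
  scripts/rpc.py -s "$sock" keyring_file_add_key key0 /tmp/tmp.nePVob0WsO
  scripts/rpc.py -s "$sock" keyring_file_add_key key1 /tmp/tmp.Ctnwc80KVX
  scripts/rpc.py -s "$sock" keyring_get_keys            # paths and refcnts are then checked with jq
  # attach to the TLS-enabled target using key0 as the PSK, run I/O, then detach
  scripts/rpc.py -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
  examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests
  scripts/rpc.py -s "$sock" bdev_nvme_detach_controller nvme0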
00:42:18.198 10:58:23 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:42:18.198 10:58:23 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:42:18.198 10:58:23 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:42:18.198 10:58:23 keyring_file -- keyring/file.sh@46 -- # bperfpid=2313289 00:42:18.198 10:58:23 keyring_file -- keyring/file.sh@48 -- # waitforlisten 2313289 /var/tmp/bperf.sock 00:42:18.198 10:58:23 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:42:18.198 10:58:23 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2313289 ']' 00:42:18.198 10:58:23 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:18.198 10:58:23 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:42:18.198 10:58:23 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:18.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:18.198 10:58:23 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:42:18.198 10:58:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:18.198 [2024-07-22 10:58:23.791670] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 00:42:18.198 [2024-07-22 10:58:23.791718] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2313289 ] 00:42:18.198 EAL: No free 2048 kB hugepages reported on node 1 00:42:18.198 [2024-07-22 10:58:23.873017] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:18.464 [2024-07-22 10:58:23.903844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:42:19.035 10:58:24 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:42:19.035 10:58:24 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:42:19.035 10:58:24 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nePVob0WsO 00:42:19.035 10:58:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nePVob0WsO 00:42:19.035 10:58:24 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Ctnwc80KVX 00:42:19.035 10:58:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Ctnwc80KVX 00:42:19.295 10:58:24 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:42:19.295 10:58:24 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:42:19.295 10:58:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:19.295 10:58:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:19.295 10:58:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:19.556 10:58:25 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.nePVob0WsO == \/\t\m\p\/\t\m\p\.\n\e\P\V\o\b\0\W\s\O ]] 00:42:19.556 10:58:25 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:42:19.556 10:58:25 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:42:19.556 10:58:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:19.556 10:58:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:19.556 10:58:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:19.556 10:58:25 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.Ctnwc80KVX == \/\t\m\p\/\t\m\p\.\C\t\n\w\c\8\0\K\V\X ]] 00:42:19.556 10:58:25 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:42:19.556 10:58:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:19.556 10:58:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:19.556 10:58:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:19.556 10:58:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:19.556 10:58:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:19.817 10:58:25 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:42:19.817 10:58:25 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:42:19.817 10:58:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:19.817 10:58:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:19.817 10:58:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:19.817 10:58:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:19.817 10:58:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:20.076 10:58:25 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:42:20.076 10:58:25 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:20.076 10:58:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:20.076 [2024-07-22 10:58:25.658152] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:20.076 nvme0n1 00:42:20.076 10:58:25 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:42:20.076 10:58:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:20.076 10:58:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:20.076 10:58:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:20.076 10:58:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:20.076 10:58:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:20.336 10:58:25 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:42:20.336 10:58:25 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:42:20.336 10:58:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:20.336 10:58:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:20.336 10:58:25 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:20.336 10:58:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:20.336 10:58:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:20.596 10:58:26 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:42:20.596 10:58:26 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:20.596 Running I/O for 1 seconds... 00:42:21.539 00:42:21.539 Latency(us) 00:42:21.539 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:21.539 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:42:21.539 nvme0n1 : 1.01 13064.53 51.03 0.00 0.00 9765.25 5106.35 16930.13 00:42:21.539 =================================================================================================================== 00:42:21.539 Total : 13064.53 51.03 0.00 0.00 9765.25 5106.35 16930.13 00:42:21.539 0 00:42:21.539 10:58:27 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:21.539 10:58:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:21.797 10:58:27 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:42:21.798 10:58:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:21.798 10:58:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:21.798 10:58:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:21.798 10:58:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:21.798 10:58:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:22.056 10:58:27 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:42:22.056 10:58:27 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:42:22.056 10:58:27 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:22.056 10:58:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:22.056 10:58:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:22.056 10:58:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:22.056 10:58:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:22.056 10:58:27 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:42:22.056 10:58:27 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:22.056 10:58:27 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:42:22.056 10:58:27 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:22.056 10:58:27 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:42:22.056 10:58:27 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:22.056 10:58:27 keyring_file -- 
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:42:22.056 10:58:27 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:22.056 10:58:27 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:22.056 10:58:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:22.316 [2024-07-22 10:58:27.825816] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:42:22.316 [2024-07-22 10:58:27.826548] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13bfd00 (107): Transport endpoint is not connected 00:42:22.316 [2024-07-22 10:58:27.827544] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13bfd00 (9): Bad file descriptor 00:42:22.316 [2024-07-22 10:58:27.828546] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:42:22.316 [2024-07-22 10:58:27.828552] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:42:22.316 [2024-07-22 10:58:27.828557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:42:22.316 request: 00:42:22.316 { 00:42:22.316 "name": "nvme0", 00:42:22.316 "trtype": "tcp", 00:42:22.316 "traddr": "127.0.0.1", 00:42:22.316 "adrfam": "ipv4", 00:42:22.316 "trsvcid": "4420", 00:42:22.316 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:22.316 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:22.316 "prchk_reftag": false, 00:42:22.316 "prchk_guard": false, 00:42:22.316 "hdgst": false, 00:42:22.316 "ddgst": false, 00:42:22.316 "psk": "key1", 00:42:22.316 "method": "bdev_nvme_attach_controller", 00:42:22.316 "req_id": 1 00:42:22.316 } 00:42:22.316 Got JSON-RPC error response 00:42:22.316 response: 00:42:22.316 { 00:42:22.316 "code": -5, 00:42:22.316 "message": "Input/output error" 00:42:22.316 } 00:42:22.316 10:58:27 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:42:22.316 10:58:27 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:42:22.316 10:58:27 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:42:22.316 10:58:27 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:42:22.316 10:58:27 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:42:22.316 10:58:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:22.316 10:58:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:22.316 10:58:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:22.316 10:58:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:22.316 10:58:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:22.316 10:58:28 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:42:22.316 10:58:28 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:42:22.316 10:58:28 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:22.316 10:58:28 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:22.316 10:58:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:22.316 10:58:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:22.316 10:58:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:22.575 10:58:28 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:42:22.575 10:58:28 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:42:22.575 10:58:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:22.835 10:58:28 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:42:22.835 10:58:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:42:22.835 10:58:28 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:42:22.835 10:58:28 keyring_file -- keyring/file.sh@77 -- # jq length 00:42:22.835 10:58:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:23.095 10:58:28 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:42:23.095 10:58:28 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.nePVob0WsO 00:42:23.095 10:58:28 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.nePVob0WsO 00:42:23.095 10:58:28 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:42:23.096 10:58:28 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.nePVob0WsO 00:42:23.096 10:58:28 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:42:23.096 10:58:28 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:23.096 10:58:28 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:42:23.096 10:58:28 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:23.096 10:58:28 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nePVob0WsO 00:42:23.096 10:58:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nePVob0WsO 00:42:23.096 [2024-07-22 10:58:28.785359] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.nePVob0WsO': 0100660 00:42:23.096 [2024-07-22 10:58:28.785376] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:42:23.096 request: 00:42:23.096 { 00:42:23.096 "name": "key0", 00:42:23.096 "path": "/tmp/tmp.nePVob0WsO", 00:42:23.096 "method": "keyring_file_add_key", 00:42:23.096 "req_id": 1 00:42:23.096 } 00:42:23.096 Got JSON-RPC error response 00:42:23.096 response: 00:42:23.096 { 00:42:23.096 "code": -1, 00:42:23.096 "message": "Operation not permitted" 00:42:23.096 } 00:42:23.355 10:58:28 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:42:23.355 10:58:28 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:42:23.355 10:58:28 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:42:23.355 10:58:28 keyring_file -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:42:23.355 10:58:28 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.nePVob0WsO 00:42:23.355 10:58:28 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nePVob0WsO 00:42:23.356 10:58:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nePVob0WsO 00:42:23.356 10:58:28 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.nePVob0WsO 00:42:23.356 10:58:28 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:42:23.356 10:58:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:23.356 10:58:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:23.356 10:58:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:23.356 10:58:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:23.356 10:58:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:23.616 10:58:29 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:42:23.616 10:58:29 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:23.616 10:58:29 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:42:23.616 10:58:29 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:23.616 10:58:29 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:42:23.616 10:58:29 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:23.616 10:58:29 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:42:23.616 10:58:29 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:23.616 10:58:29 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:23.616 10:58:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:23.616 [2024-07-22 10:58:29.266587] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.nePVob0WsO': No such file or directory 00:42:23.616 [2024-07-22 10:58:29.266602] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:42:23.616 [2024-07-22 10:58:29.266618] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:42:23.616 [2024-07-22 10:58:29.266623] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:42:23.616 [2024-07-22 10:58:29.266628] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:42:23.616 request: 00:42:23.616 { 00:42:23.616 "name": "nvme0", 00:42:23.616 "trtype": "tcp", 00:42:23.616 "traddr": "127.0.0.1", 00:42:23.616 "adrfam": "ipv4", 00:42:23.616 
"trsvcid": "4420", 00:42:23.616 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:23.616 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:23.616 "prchk_reftag": false, 00:42:23.616 "prchk_guard": false, 00:42:23.616 "hdgst": false, 00:42:23.616 "ddgst": false, 00:42:23.616 "psk": "key0", 00:42:23.616 "method": "bdev_nvme_attach_controller", 00:42:23.616 "req_id": 1 00:42:23.616 } 00:42:23.616 Got JSON-RPC error response 00:42:23.616 response: 00:42:23.616 { 00:42:23.616 "code": -19, 00:42:23.616 "message": "No such device" 00:42:23.616 } 00:42:23.616 10:58:29 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:42:23.616 10:58:29 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:42:23.616 10:58:29 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:42:23.616 10:58:29 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:42:23.616 10:58:29 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:42:23.616 10:58:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:23.876 10:58:29 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:42:23.876 10:58:29 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:42:23.876 10:58:29 keyring_file -- keyring/common.sh@17 -- # name=key0 00:42:23.876 10:58:29 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:23.876 10:58:29 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:23.876 10:58:29 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:23.876 10:58:29 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.A1uS6IoYXX 00:42:23.876 10:58:29 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:23.876 10:58:29 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:23.876 10:58:29 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:42:23.876 10:58:29 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:42:23.876 10:58:29 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:42:23.876 10:58:29 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:42:23.876 10:58:29 keyring_file -- nvmf/common.sh@705 -- # python - 00:42:23.876 10:58:29 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.A1uS6IoYXX 00:42:23.876 10:58:29 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.A1uS6IoYXX 00:42:23.876 10:58:29 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.A1uS6IoYXX 00:42:23.876 10:58:29 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.A1uS6IoYXX 00:42:23.876 10:58:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.A1uS6IoYXX 00:42:24.136 10:58:29 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:24.136 10:58:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:24.396 nvme0n1 00:42:24.396 
10:58:29 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:42:24.396 10:58:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:24.396 10:58:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:24.396 10:58:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:24.396 10:58:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:24.396 10:58:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:24.396 10:58:30 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:42:24.396 10:58:30 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:42:24.396 10:58:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:24.657 10:58:30 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:42:24.657 10:58:30 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:42:24.657 10:58:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:24.657 10:58:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:24.657 10:58:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:24.657 10:58:30 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:42:24.657 10:58:30 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:42:24.657 10:58:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:24.657 10:58:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:24.657 10:58:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:24.657 10:58:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:24.657 10:58:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:24.917 10:58:30 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:42:24.917 10:58:30 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:24.917 10:58:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:25.177 10:58:30 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:42:25.177 10:58:30 keyring_file -- keyring/file.sh@104 -- # jq length 00:42:25.177 10:58:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:25.177 10:58:30 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:42:25.177 10:58:30 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.A1uS6IoYXX 00:42:25.177 10:58:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.A1uS6IoYXX 00:42:25.437 10:58:30 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Ctnwc80KVX 00:42:25.437 10:58:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Ctnwc80KVX 00:42:25.697 10:58:31 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:25.697 10:58:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:25.697 nvme0n1 00:42:25.697 10:58:31 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:42:25.697 10:58:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:42:25.962 10:58:31 keyring_file -- keyring/file.sh@112 -- # config='{ 00:42:25.962 "subsystems": [ 00:42:25.962 { 00:42:25.962 "subsystem": "keyring", 00:42:25.962 "config": [ 00:42:25.962 { 00:42:25.962 "method": "keyring_file_add_key", 00:42:25.962 "params": { 00:42:25.962 "name": "key0", 00:42:25.962 "path": "/tmp/tmp.A1uS6IoYXX" 00:42:25.962 } 00:42:25.962 }, 00:42:25.962 { 00:42:25.962 "method": "keyring_file_add_key", 00:42:25.962 "params": { 00:42:25.962 "name": "key1", 00:42:25.962 "path": "/tmp/tmp.Ctnwc80KVX" 00:42:25.962 } 00:42:25.962 } 00:42:25.962 ] 00:42:25.962 }, 00:42:25.962 { 00:42:25.962 "subsystem": "iobuf", 00:42:25.962 "config": [ 00:42:25.962 { 00:42:25.962 "method": "iobuf_set_options", 00:42:25.962 "params": { 00:42:25.962 "small_pool_count": 8192, 00:42:25.962 "large_pool_count": 1024, 00:42:25.962 "small_bufsize": 8192, 00:42:25.962 "large_bufsize": 135168 00:42:25.962 } 00:42:25.962 } 00:42:25.962 ] 00:42:25.962 }, 00:42:25.962 { 00:42:25.962 "subsystem": "sock", 00:42:25.962 "config": [ 00:42:25.962 { 00:42:25.962 "method": "sock_set_default_impl", 00:42:25.962 "params": { 00:42:25.962 "impl_name": "posix" 00:42:25.962 } 00:42:25.962 }, 00:42:25.962 { 00:42:25.962 "method": "sock_impl_set_options", 00:42:25.962 "params": { 00:42:25.962 "impl_name": "ssl", 00:42:25.962 "recv_buf_size": 4096, 00:42:25.962 "send_buf_size": 4096, 00:42:25.962 "enable_recv_pipe": true, 00:42:25.962 "enable_quickack": false, 00:42:25.962 "enable_placement_id": 0, 00:42:25.962 "enable_zerocopy_send_server": true, 00:42:25.962 "enable_zerocopy_send_client": false, 00:42:25.962 "zerocopy_threshold": 0, 00:42:25.962 "tls_version": 0, 00:42:25.962 "enable_ktls": false 00:42:25.962 } 00:42:25.962 }, 00:42:25.962 { 00:42:25.962 "method": "sock_impl_set_options", 00:42:25.962 "params": { 00:42:25.962 "impl_name": "posix", 00:42:25.962 "recv_buf_size": 2097152, 00:42:25.962 "send_buf_size": 2097152, 00:42:25.962 "enable_recv_pipe": true, 00:42:25.962 "enable_quickack": false, 00:42:25.962 "enable_placement_id": 0, 00:42:25.962 "enable_zerocopy_send_server": true, 00:42:25.962 "enable_zerocopy_send_client": false, 00:42:25.962 "zerocopy_threshold": 0, 00:42:25.962 "tls_version": 0, 00:42:25.962 "enable_ktls": false 00:42:25.962 } 00:42:25.962 } 00:42:25.962 ] 00:42:25.962 }, 00:42:25.962 { 00:42:25.962 "subsystem": "vmd", 00:42:25.962 "config": [] 00:42:25.962 }, 00:42:25.962 { 00:42:25.962 "subsystem": "accel", 00:42:25.962 "config": [ 00:42:25.962 { 00:42:25.962 "method": "accel_set_options", 00:42:25.962 "params": { 00:42:25.962 "small_cache_size": 128, 00:42:25.962 "large_cache_size": 16, 00:42:25.962 "task_count": 2048, 00:42:25.962 "sequence_count": 2048, 00:42:25.962 "buf_count": 2048 00:42:25.962 } 00:42:25.962 } 00:42:25.962 ] 00:42:25.962 
}, 00:42:25.962 { 00:42:25.962 "subsystem": "bdev", 00:42:25.962 "config": [ 00:42:25.962 { 00:42:25.962 "method": "bdev_set_options", 00:42:25.962 "params": { 00:42:25.962 "bdev_io_pool_size": 65535, 00:42:25.962 "bdev_io_cache_size": 256, 00:42:25.962 "bdev_auto_examine": true, 00:42:25.962 "iobuf_small_cache_size": 128, 00:42:25.962 "iobuf_large_cache_size": 16 00:42:25.962 } 00:42:25.962 }, 00:42:25.962 { 00:42:25.962 "method": "bdev_raid_set_options", 00:42:25.962 "params": { 00:42:25.962 "process_window_size_kb": 1024, 00:42:25.962 "process_max_bandwidth_mb_sec": 0 00:42:25.962 } 00:42:25.962 }, 00:42:25.962 { 00:42:25.962 "method": "bdev_iscsi_set_options", 00:42:25.962 "params": { 00:42:25.962 "timeout_sec": 30 00:42:25.962 } 00:42:25.962 }, 00:42:25.962 { 00:42:25.962 "method": "bdev_nvme_set_options", 00:42:25.962 "params": { 00:42:25.962 "action_on_timeout": "none", 00:42:25.962 "timeout_us": 0, 00:42:25.963 "timeout_admin_us": 0, 00:42:25.963 "keep_alive_timeout_ms": 10000, 00:42:25.963 "arbitration_burst": 0, 00:42:25.963 "low_priority_weight": 0, 00:42:25.963 "medium_priority_weight": 0, 00:42:25.963 "high_priority_weight": 0, 00:42:25.963 "nvme_adminq_poll_period_us": 10000, 00:42:25.963 "nvme_ioq_poll_period_us": 0, 00:42:25.963 "io_queue_requests": 512, 00:42:25.963 "delay_cmd_submit": true, 00:42:25.963 "transport_retry_count": 4, 00:42:25.963 "bdev_retry_count": 3, 00:42:25.963 "transport_ack_timeout": 0, 00:42:25.963 "ctrlr_loss_timeout_sec": 0, 00:42:25.963 "reconnect_delay_sec": 0, 00:42:25.963 "fast_io_fail_timeout_sec": 0, 00:42:25.963 "disable_auto_failback": false, 00:42:25.963 "generate_uuids": false, 00:42:25.963 "transport_tos": 0, 00:42:25.963 "nvme_error_stat": false, 00:42:25.963 "rdma_srq_size": 0, 00:42:25.963 "io_path_stat": false, 00:42:25.963 "allow_accel_sequence": false, 00:42:25.963 "rdma_max_cq_size": 0, 00:42:25.963 "rdma_cm_event_timeout_ms": 0, 00:42:25.963 "dhchap_digests": [ 00:42:25.963 "sha256", 00:42:25.963 "sha384", 00:42:25.963 "sha512" 00:42:25.963 ], 00:42:25.963 "dhchap_dhgroups": [ 00:42:25.963 "null", 00:42:25.963 "ffdhe2048", 00:42:25.963 "ffdhe3072", 00:42:25.963 "ffdhe4096", 00:42:25.963 "ffdhe6144", 00:42:25.963 "ffdhe8192" 00:42:25.963 ] 00:42:25.963 } 00:42:25.963 }, 00:42:25.963 { 00:42:25.963 "method": "bdev_nvme_attach_controller", 00:42:25.963 "params": { 00:42:25.963 "name": "nvme0", 00:42:25.963 "trtype": "TCP", 00:42:25.963 "adrfam": "IPv4", 00:42:25.963 "traddr": "127.0.0.1", 00:42:25.963 "trsvcid": "4420", 00:42:25.963 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:25.963 "prchk_reftag": false, 00:42:25.963 "prchk_guard": false, 00:42:25.963 "ctrlr_loss_timeout_sec": 0, 00:42:25.963 "reconnect_delay_sec": 0, 00:42:25.963 "fast_io_fail_timeout_sec": 0, 00:42:25.963 "psk": "key0", 00:42:25.963 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:25.963 "hdgst": false, 00:42:25.963 "ddgst": false 00:42:25.963 } 00:42:25.963 }, 00:42:25.963 { 00:42:25.963 "method": "bdev_nvme_set_hotplug", 00:42:25.963 "params": { 00:42:25.963 "period_us": 100000, 00:42:25.963 "enable": false 00:42:25.963 } 00:42:25.963 }, 00:42:25.963 { 00:42:25.963 "method": "bdev_wait_for_examine" 00:42:25.963 } 00:42:25.963 ] 00:42:25.963 }, 00:42:25.963 { 00:42:25.963 "subsystem": "nbd", 00:42:25.963 "config": [] 00:42:25.963 } 00:42:25.963 ] 00:42:25.963 }' 00:42:25.963 10:58:31 keyring_file -- keyring/file.sh@114 -- # killprocess 2313289 00:42:25.963 10:58:31 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2313289 ']' 00:42:25.963 10:58:31 
keyring_file -- common/autotest_common.sh@952 -- # kill -0 2313289 00:42:25.963 10:58:31 keyring_file -- common/autotest_common.sh@953 -- # uname 00:42:25.963 10:58:31 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:42:25.963 10:58:31 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2313289 00:42:26.224 10:58:31 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:42:26.224 10:58:31 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:42:26.224 10:58:31 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2313289' 00:42:26.224 killing process with pid 2313289 00:42:26.224 10:58:31 keyring_file -- common/autotest_common.sh@967 -- # kill 2313289 00:42:26.224 Received shutdown signal, test time was about 1.000000 seconds 00:42:26.224 00:42:26.224 Latency(us) 00:42:26.224 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:26.224 =================================================================================================================== 00:42:26.224 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:26.224 10:58:31 keyring_file -- common/autotest_common.sh@972 -- # wait 2313289 00:42:26.224 10:58:31 keyring_file -- keyring/file.sh@117 -- # bperfpid=2314809 00:42:26.224 10:58:31 keyring_file -- keyring/file.sh@119 -- # waitforlisten 2314809 /var/tmp/bperf.sock 00:42:26.224 10:58:31 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2314809 ']' 00:42:26.224 10:58:31 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:26.224 10:58:31 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:42:26.224 10:58:31 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:42:26.224 10:58:31 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:26.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
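A note on the bperf_cmd calls traced throughout this section: every keyring/common.sh@8 invocation above is the same thin wrapper that points rpc.py at the bdevperf instance's private UNIX-domain socket instead of the default spdk_tgt socket. A minimal sketch of that wrapper, using only paths visible in this log (the exact body of common.sh may differ):

bperfsock=/var/tmp/bperf.sock
bperf_cmd() {
    # forward any RPC (keyring_get_keys, save_config, bdev_nvme_attach_controller, ...)
    # to the bdevperf process listening on the private socket
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s "$bperfsock" "$@"
}
# e.g. the calls seen in this trace:
bperf_cmd keyring_get_keys | jq length
bperf_cmd bdev_nvme_get_controllers | jq -r '.[].name'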
00:42:26.224 10:58:31 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:42:26.224 10:58:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:26.224 10:58:31 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:42:26.224 "subsystems": [ 00:42:26.224 { 00:42:26.224 "subsystem": "keyring", 00:42:26.224 "config": [ 00:42:26.224 { 00:42:26.224 "method": "keyring_file_add_key", 00:42:26.224 "params": { 00:42:26.224 "name": "key0", 00:42:26.224 "path": "/tmp/tmp.A1uS6IoYXX" 00:42:26.224 } 00:42:26.224 }, 00:42:26.224 { 00:42:26.224 "method": "keyring_file_add_key", 00:42:26.224 "params": { 00:42:26.224 "name": "key1", 00:42:26.224 "path": "/tmp/tmp.Ctnwc80KVX" 00:42:26.224 } 00:42:26.224 } 00:42:26.224 ] 00:42:26.224 }, 00:42:26.224 { 00:42:26.225 "subsystem": "iobuf", 00:42:26.225 "config": [ 00:42:26.225 { 00:42:26.225 "method": "iobuf_set_options", 00:42:26.225 "params": { 00:42:26.225 "small_pool_count": 8192, 00:42:26.225 "large_pool_count": 1024, 00:42:26.225 "small_bufsize": 8192, 00:42:26.225 "large_bufsize": 135168 00:42:26.225 } 00:42:26.225 } 00:42:26.225 ] 00:42:26.225 }, 00:42:26.225 { 00:42:26.225 "subsystem": "sock", 00:42:26.225 "config": [ 00:42:26.225 { 00:42:26.225 "method": "sock_set_default_impl", 00:42:26.225 "params": { 00:42:26.225 "impl_name": "posix" 00:42:26.225 } 00:42:26.225 }, 00:42:26.225 { 00:42:26.225 "method": "sock_impl_set_options", 00:42:26.225 "params": { 00:42:26.225 "impl_name": "ssl", 00:42:26.225 "recv_buf_size": 4096, 00:42:26.225 "send_buf_size": 4096, 00:42:26.225 "enable_recv_pipe": true, 00:42:26.225 "enable_quickack": false, 00:42:26.225 "enable_placement_id": 0, 00:42:26.225 "enable_zerocopy_send_server": true, 00:42:26.225 "enable_zerocopy_send_client": false, 00:42:26.225 "zerocopy_threshold": 0, 00:42:26.225 "tls_version": 0, 00:42:26.225 "enable_ktls": false 00:42:26.225 } 00:42:26.225 }, 00:42:26.225 { 00:42:26.225 "method": "sock_impl_set_options", 00:42:26.225 "params": { 00:42:26.225 "impl_name": "posix", 00:42:26.225 "recv_buf_size": 2097152, 00:42:26.225 "send_buf_size": 2097152, 00:42:26.225 "enable_recv_pipe": true, 00:42:26.225 "enable_quickack": false, 00:42:26.225 "enable_placement_id": 0, 00:42:26.225 "enable_zerocopy_send_server": true, 00:42:26.225 "enable_zerocopy_send_client": false, 00:42:26.225 "zerocopy_threshold": 0, 00:42:26.225 "tls_version": 0, 00:42:26.225 "enable_ktls": false 00:42:26.225 } 00:42:26.225 } 00:42:26.225 ] 00:42:26.225 }, 00:42:26.225 { 00:42:26.225 "subsystem": "vmd", 00:42:26.225 "config": [] 00:42:26.225 }, 00:42:26.225 { 00:42:26.225 "subsystem": "accel", 00:42:26.225 "config": [ 00:42:26.225 { 00:42:26.225 "method": "accel_set_options", 00:42:26.225 "params": { 00:42:26.225 "small_cache_size": 128, 00:42:26.225 "large_cache_size": 16, 00:42:26.225 "task_count": 2048, 00:42:26.225 "sequence_count": 2048, 00:42:26.225 "buf_count": 2048 00:42:26.225 } 00:42:26.225 } 00:42:26.225 ] 00:42:26.225 }, 00:42:26.225 { 00:42:26.225 "subsystem": "bdev", 00:42:26.225 "config": [ 00:42:26.225 { 00:42:26.225 "method": "bdev_set_options", 00:42:26.225 "params": { 00:42:26.225 "bdev_io_pool_size": 65535, 00:42:26.225 "bdev_io_cache_size": 256, 00:42:26.225 "bdev_auto_examine": true, 00:42:26.225 "iobuf_small_cache_size": 128, 00:42:26.225 "iobuf_large_cache_size": 16 00:42:26.225 } 00:42:26.225 }, 00:42:26.225 { 00:42:26.225 "method": "bdev_raid_set_options", 00:42:26.225 "params": { 00:42:26.225 "process_window_size_kb": 1024, 00:42:26.225 "process_max_bandwidth_mb_sec": 0 00:42:26.225 
} 00:42:26.225 }, 00:42:26.225 { 00:42:26.225 "method": "bdev_iscsi_set_options", 00:42:26.225 "params": { 00:42:26.225 "timeout_sec": 30 00:42:26.225 } 00:42:26.225 }, 00:42:26.225 { 00:42:26.225 "method": "bdev_nvme_set_options", 00:42:26.225 "params": { 00:42:26.225 "action_on_timeout": "none", 00:42:26.225 "timeout_us": 0, 00:42:26.225 "timeout_admin_us": 0, 00:42:26.225 "keep_alive_timeout_ms": 10000, 00:42:26.225 "arbitration_burst": 0, 00:42:26.225 "low_priority_weight": 0, 00:42:26.225 "medium_priority_weight": 0, 00:42:26.225 "high_priority_weight": 0, 00:42:26.225 "nvme_adminq_poll_period_us": 10000, 00:42:26.225 "nvme_ioq_poll_period_us": 0, 00:42:26.225 "io_queue_requests": 512, 00:42:26.225 "delay_cmd_submit": true, 00:42:26.225 "transport_retry_count": 4, 00:42:26.225 "bdev_retry_count": 3, 00:42:26.225 "transport_ack_timeout": 0, 00:42:26.225 "ctrlr_loss_timeout_sec": 0, 00:42:26.225 "reconnect_delay_sec": 0, 00:42:26.225 "fast_io_fail_timeout_sec": 0, 00:42:26.225 "disable_auto_failback": false, 00:42:26.225 "generate_uuids": false, 00:42:26.225 "transport_tos": 0, 00:42:26.225 "nvme_error_stat": false, 00:42:26.225 "rdma_srq_size": 0, 00:42:26.225 "io_path_stat": false, 00:42:26.225 "allow_accel_sequence": false, 00:42:26.225 "rdma_max_cq_size": 0, 00:42:26.225 "rdma_cm_event_timeout_ms": 0, 00:42:26.225 "dhchap_digests": [ 00:42:26.225 "sha256", 00:42:26.225 "sha384", 00:42:26.225 "sha512" 00:42:26.225 ], 00:42:26.225 "dhchap_dhgroups": [ 00:42:26.225 "null", 00:42:26.225 "ffdhe2048", 00:42:26.225 "ffdhe3072", 00:42:26.225 "ffdhe4096", 00:42:26.225 "ffdhe6144", 00:42:26.225 "ffdhe8192" 00:42:26.225 ] 00:42:26.225 } 00:42:26.225 }, 00:42:26.225 { 00:42:26.225 "method": "bdev_nvme_attach_controller", 00:42:26.225 "params": { 00:42:26.225 "name": "nvme0", 00:42:26.225 "trtype": "TCP", 00:42:26.225 "adrfam": "IPv4", 00:42:26.225 "traddr": "127.0.0.1", 00:42:26.225 "trsvcid": "4420", 00:42:26.225 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:26.225 "prchk_reftag": false, 00:42:26.225 "prchk_guard": false, 00:42:26.225 "ctrlr_loss_timeout_sec": 0, 00:42:26.225 "reconnect_delay_sec": 0, 00:42:26.225 "fast_io_fail_timeout_sec": 0, 00:42:26.226 "psk": "key0", 00:42:26.226 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:26.226 "hdgst": false, 00:42:26.226 "ddgst": false 00:42:26.226 } 00:42:26.226 }, 00:42:26.226 { 00:42:26.226 "method": "bdev_nvme_set_hotplug", 00:42:26.226 "params": { 00:42:26.226 "period_us": 100000, 00:42:26.226 "enable": false 00:42:26.226 } 00:42:26.226 }, 00:42:26.226 { 00:42:26.226 "method": "bdev_wait_for_examine" 00:42:26.226 } 00:42:26.226 ] 00:42:26.226 }, 00:42:26.226 { 00:42:26.226 "subsystem": "nbd", 00:42:26.226 "config": [] 00:42:26.226 } 00:42:26.226 ] 00:42:26.226 }' 00:42:26.226 [2024-07-22 10:58:31.819294] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
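The -c /dev/fd/63 argument in the bdevperf command line above is bash process substitution: the JSON captured from the previous instance with save_config (file.sh@112) and echoed again by file.sh@115 is handed to the new instance as its startup config, so the replacement bdevperf comes up with the same keyring, sock and bdev subsystem settings. A sketch of the pattern, assuming the bperf_cmd wrapper sketched earlier:

# capture the running configuration of the current bdevperf instance
config=$(bperf_cmd save_config)
# start a fresh bdevperf and hand it that config via process substitution,
# which the shell exposes as /dev/fd/63 in the traced command line
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c <(echo "$config")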
00:42:26.226 [2024-07-22 10:58:31.819352] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2314809 ] 00:42:26.226 EAL: No free 2048 kB hugepages reported on node 1 00:42:26.226 [2024-07-22 10:58:31.897847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:26.486 [2024-07-22 10:58:31.926337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:42:26.486 [2024-07-22 10:58:32.063165] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:27.056 10:58:32 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:42:27.056 10:58:32 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:42:27.056 10:58:32 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:42:27.056 10:58:32 keyring_file -- keyring/file.sh@120 -- # jq length 00:42:27.056 10:58:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:27.056 10:58:32 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:42:27.056 10:58:32 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:42:27.056 10:58:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:27.056 10:58:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:27.056 10:58:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:27.057 10:58:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:27.057 10:58:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:27.316 10:58:32 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:42:27.316 10:58:32 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:42:27.317 10:58:32 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:27.317 10:58:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:27.317 10:58:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:27.317 10:58:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:27.317 10:58:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:27.576 10:58:33 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:42:27.576 10:58:33 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:42:27.576 10:58:33 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:42:27.576 10:58:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:42:27.576 10:58:33 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:42:27.576 10:58:33 keyring_file -- keyring/file.sh@1 -- # cleanup 00:42:27.576 10:58:33 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.A1uS6IoYXX /tmp/tmp.Ctnwc80KVX 00:42:27.576 10:58:33 keyring_file -- keyring/file.sh@20 -- # killprocess 2314809 00:42:27.576 10:58:33 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2314809 ']' 00:42:27.576 10:58:33 keyring_file -- common/autotest_common.sh@952 -- # kill -0 2314809 00:42:27.576 10:58:33 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:42:27.576 10:58:33 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:42:27.576 10:58:33 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2314809 00:42:27.836 10:58:33 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:42:27.836 10:58:33 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:42:27.836 10:58:33 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2314809' 00:42:27.836 killing process with pid 2314809 00:42:27.836 10:58:33 keyring_file -- common/autotest_common.sh@967 -- # kill 2314809 00:42:27.836 Received shutdown signal, test time was about 1.000000 seconds 00:42:27.836 00:42:27.837 Latency(us) 00:42:27.837 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:27.837 =================================================================================================================== 00:42:27.837 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:42:27.837 10:58:33 keyring_file -- common/autotest_common.sh@972 -- # wait 2314809 00:42:27.837 10:58:33 keyring_file -- keyring/file.sh@21 -- # killprocess 2313050 00:42:27.837 10:58:33 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2313050 ']' 00:42:27.837 10:58:33 keyring_file -- common/autotest_common.sh@952 -- # kill -0 2313050 00:42:27.837 10:58:33 keyring_file -- common/autotest_common.sh@953 -- # uname 00:42:27.837 10:58:33 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:42:27.837 10:58:33 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2313050 00:42:27.837 10:58:33 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:42:27.837 10:58:33 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:42:27.837 10:58:33 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2313050' 00:42:27.837 killing process with pid 2313050 00:42:27.837 10:58:33 keyring_file -- common/autotest_common.sh@967 -- # kill 2313050 00:42:27.837 [2024-07-22 10:58:33.440961] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:42:27.837 10:58:33 keyring_file -- common/autotest_common.sh@972 -- # wait 2313050 00:42:28.097 00:42:28.097 real 0m11.033s 00:42:28.097 user 0m26.261s 00:42:28.097 sys 0m2.613s 00:42:28.097 10:58:33 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:28.097 10:58:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:28.097 ************************************ 00:42:28.097 END TEST keyring_file 00:42:28.097 ************************************ 00:42:28.097 10:58:33 -- common/autotest_common.sh@1142 -- # return 0 00:42:28.097 10:58:33 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:42:28.097 10:58:33 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:42:28.097 10:58:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:28.097 10:58:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:28.097 10:58:33 -- common/autotest_common.sh@10 -- # set +x 00:42:28.097 ************************************ 00:42:28.097 START TEST keyring_linux 00:42:28.097 ************************************ 00:42:28.097 10:58:33 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:42:28.366 * Looking for test storage... 00:42:28.366 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:42:28.366 10:58:33 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:42:28.366 10:58:33 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:28.366 10:58:33 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:42:28.366 10:58:33 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:28.366 10:58:33 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:28.366 10:58:33 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:28.366 10:58:33 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:28.366 10:58:33 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:28.366 10:58:33 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:28.366 10:58:33 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:28.366 10:58:33 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:28.366 10:58:33 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:28.366 10:58:33 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:28.366 10:58:33 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:28.366 10:58:33 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:28.366 10:58:33 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:28.366 10:58:33 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:28.366 10:58:33 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:28.366 10:58:33 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:28.366 10:58:33 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:28.366 10:58:33 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:28.366 10:58:33 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:28.366 10:58:33 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:28.366 10:58:33 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:28.366 10:58:33 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:28.366 10:58:33 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:28.366 10:58:33 keyring_linux -- paths/export.sh@5 -- # export PATH 00:42:28.366 10:58:33 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:28.366 10:58:33 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:42:28.366 10:58:33 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:42:28.366 10:58:33 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:42:28.366 10:58:33 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:28.366 10:58:33 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:28.366 10:58:33 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:28.366 10:58:33 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:42:28.366 10:58:33 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:42:28.366 10:58:33 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:42:28.366 10:58:33 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:42:28.366 10:58:33 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:42:28.366 10:58:33 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:42:28.366 10:58:33 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:42:28.366 10:58:33 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:42:28.366 10:58:33 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:42:28.366 10:58:33 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:42:28.366 10:58:33 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:42:28.366 10:58:33 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:42:28.366 10:58:33 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:28.366 10:58:33 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:42:28.366 10:58:33 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:42:28.366 10:58:33 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:28.366 10:58:33 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:28.366 10:58:33 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:42:28.366 10:58:33 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:42:28.366 10:58:33 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:42:28.366 10:58:33 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:42:28.366 10:58:33 keyring_linux -- nvmf/common.sh@705 -- # python - 00:42:28.366 10:58:33 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:42:28.366 10:58:33 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:42:28.366 /tmp/:spdk-test:key0 00:42:28.366 10:58:33 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:42:28.366 10:58:33 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:42:28.366 10:58:33 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:42:28.366 10:58:33 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:42:28.366 10:58:33 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:42:28.366 10:58:33 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:42:28.366 10:58:33 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:42:28.366 10:58:33 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:42:28.366 10:58:33 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:42:28.366 10:58:33 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:42:28.366 10:58:33 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:42:28.366 10:58:33 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:42:28.366 10:58:33 keyring_linux -- nvmf/common.sh@705 -- # python - 00:42:28.366 10:58:33 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:42:28.366 10:58:33 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:42:28.366 /tmp/:spdk-test:key1 00:42:28.366 10:58:33 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2315382 00:42:28.366 10:58:33 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2315382 00:42:28.366 10:58:33 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:42:28.366 10:58:33 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 2315382 ']' 00:42:28.366 10:58:33 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:28.366 10:58:33 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:42:28.366 10:58:33 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:28.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:28.367 10:58:33 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:42:28.367 10:58:33 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:28.367 [2024-07-22 10:58:34.000547] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
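The prep_key/format_interchange_psk calls traced just above turn the raw hex key material into the TLS PSK interchange string that keyctl and the keyring RPCs consume (the NVMeTLSkey-1:00:... values seen below). The sketch that follows reproduces the apparent shape of that helper: Base64 of the key bytes with a CRC32 appended, prefixed by the interchange label and the digest identifier. The CRC byte order (little-endian) is an assumption here, not something this log confirms; treat this as illustrative rather than the canonical format_key implementation.

prefix=NVMeTLSkey-1 key=00112233445566778899aabbccddeeff digest=0
python3 - "$prefix" "$key" "$digest" <<'EOF'
import base64, sys, zlib
prefix, key, digest = sys.argv[1], sys.argv[2].encode(), int(sys.argv[3])
crc = zlib.crc32(key).to_bytes(4, "little")   # byte order assumed, not confirmed by the log
print("{}:{:02x}:{}:".format(prefix, digest, base64.b64encode(key + crc).decode()))
EOF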
00:42:28.367 [2024-07-22 10:58:34.000624] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2315382 ] 00:42:28.367 EAL: No free 2048 kB hugepages reported on node 1 00:42:28.627 [2024-07-22 10:58:34.072542] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:28.627 [2024-07-22 10:58:34.112388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:29.197 10:58:34 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:42:29.197 10:58:34 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:42:29.197 10:58:34 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:42:29.197 10:58:34 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:29.197 10:58:34 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:29.197 [2024-07-22 10:58:34.784378] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:29.197 null0 00:42:29.197 [2024-07-22 10:58:34.816423] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:42:29.197 [2024-07-22 10:58:34.816796] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:42:29.197 10:58:34 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:29.197 10:58:34 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:42:29.197 51645031 00:42:29.197 10:58:34 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:42:29.197 452124577 00:42:29.197 10:58:34 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2315535 00:42:29.197 10:58:34 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2315535 /var/tmp/bperf.sock 00:42:29.197 10:58:34 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 2315535 ']' 00:42:29.197 10:58:34 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:29.197 10:58:34 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:42:29.197 10:58:34 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:29.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:29.197 10:58:34 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:42:29.197 10:58:34 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:29.197 10:58:34 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:42:29.197 [2024-07-22 10:58:34.890065] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 22.11.4 initialization... 
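For readers unfamiliar with keyctl, the keyring_linux flow below boils down to a small kernel-keyring round trip: the interchange-format PSK is added to the session keyring under a well-known name, looked up again to obtain its serial number (51645031 and 452124577 in this run), compared against the serial SPDK reports for the same key, and unlinked during cleanup. A condensed sketch using only commands that appear in this trace:

# add the PSK to the session keyring (@s); keyctl prints the new serial number
keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s
# look the key up by name to recover its serial, then inspect and remove it
sn=$(keyctl search @s user :spdk-test:key0)
keyctl print "$sn"     # payload should match the NVMeTLSkey-1:00:... string
keyctl unlink "$sn"    # cleanup() does this for key0 and key1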
00:42:29.197 [2024-07-22 10:58:34.890111] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2315535 ] 00:42:29.458 EAL: No free 2048 kB hugepages reported on node 1 00:42:29.458 [2024-07-22 10:58:34.968402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:29.458 [2024-07-22 10:58:34.996924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:42:30.030 10:58:35 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:42:30.030 10:58:35 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:42:30.030 10:58:35 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:42:30.030 10:58:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:42:30.291 10:58:35 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:42:30.291 10:58:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:42:30.552 10:58:35 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:42:30.552 10:58:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:42:30.552 [2024-07-22 10:58:36.130632] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:30.552 nvme0n1 00:42:30.552 10:58:36 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:42:30.552 10:58:36 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:42:30.552 10:58:36 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:42:30.552 10:58:36 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:42:30.552 10:58:36 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:42:30.552 10:58:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:30.814 10:58:36 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:42:30.814 10:58:36 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:42:30.814 10:58:36 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:42:30.814 10:58:36 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:42:30.814 10:58:36 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:30.814 10:58:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:30.814 10:58:36 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:42:31.075 10:58:36 keyring_linux -- keyring/linux.sh@25 -- # sn=51645031 00:42:31.075 10:58:36 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:42:31.075 10:58:36 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:42:31.075 10:58:36 keyring_linux -- keyring/linux.sh@26 -- # [[ 51645031 == \5\1\6\4\5\0\3\1 ]] 00:42:31.075 10:58:36 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 51645031 00:42:31.075 10:58:36 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:42:31.075 10:58:36 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:31.075 Running I/O for 1 seconds... 00:42:32.018 00:42:32.018 Latency(us) 00:42:32.018 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:32.018 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:42:32.018 nvme0n1 : 1.01 13307.34 51.98 0.00 0.00 9572.95 7864.32 16056.32 00:42:32.018 =================================================================================================================== 00:42:32.018 Total : 13307.34 51.98 0.00 0.00 9572.95 7864.32 16056.32 00:42:32.018 0 00:42:32.018 10:58:37 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:32.018 10:58:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:32.278 10:58:37 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:42:32.279 10:58:37 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:42:32.279 10:58:37 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:42:32.279 10:58:37 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:42:32.279 10:58:37 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:42:32.279 10:58:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:32.279 10:58:37 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:42:32.279 10:58:37 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:42:32.279 10:58:37 keyring_linux -- keyring/linux.sh@23 -- # return 00:42:32.279 10:58:37 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:32.279 10:58:37 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:42:32.279 10:58:37 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:32.279 10:58:37 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:42:32.539 10:58:37 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:32.539 10:58:37 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:42:32.539 10:58:37 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:32.539 10:58:37 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:32.539 10:58:37 keyring_linux -- keyring/common.sh@8 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:32.539 [2024-07-22 10:58:38.127213] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:42:32.539 [2024-07-22 10:58:38.127216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd03150 (107): Transport endpoint is not connected [2024-07-22 10:58:38.128211] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd03150 (9): Bad file descriptor 00:42:32.539 [2024-07-22 10:58:38.129213] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:42:32.539 [2024-07-22 10:58:38.129219] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:42:32.539 [2024-07-22 10:58:38.129225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:42:32.539 request: 00:42:32.539 { 00:42:32.539 "name": "nvme0", 00:42:32.539 "trtype": "tcp", 00:42:32.539 "traddr": "127.0.0.1", 00:42:32.539 "adrfam": "ipv4", 00:42:32.539 "trsvcid": "4420", 00:42:32.539 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:32.539 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:32.539 "prchk_reftag": false, 00:42:32.539 "prchk_guard": false, 00:42:32.539 "hdgst": false, 00:42:32.539 "ddgst": false, 00:42:32.539 "psk": ":spdk-test:key1", 00:42:32.539 "method": "bdev_nvme_attach_controller", 00:42:32.539 "req_id": 1 00:42:32.539 } 00:42:32.539 Got JSON-RPC error response 00:42:32.539 response: 00:42:32.539 { 00:42:32.539 "code": -5, 00:42:32.539 "message": "Input/output error" 00:42:32.539 } 00:42:32.539 10:58:38 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:42:32.539 10:58:38 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:42:32.539 10:58:38 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:42:32.539 10:58:38 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:42:32.539 10:58:38 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:42:32.539 10:58:38 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:42:32.539 10:58:38 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:42:32.539 10:58:38 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:42:32.539 10:58:38 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:42:32.539 10:58:38 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:42:32.539 10:58:38 keyring_linux -- keyring/linux.sh@33 -- # sn=51645031 00:42:32.539 10:58:38 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 51645031 00:42:32.539 1 links removed 00:42:32.540 10:58:38 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:42:32.540 10:58:38 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:42:32.540 10:58:38 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:42:32.540 10:58:38 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:42:32.540 10:58:38 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:42:32.540 10:58:38 keyring_linux -- keyring/linux.sh@33 -- # sn=452124577 00:42:32.540 10:58:38 keyring_linux -- 
keyring/linux.sh@34 -- # keyctl unlink 452124577 00:42:32.540 1 links removed 00:42:32.540 10:58:38 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2315535 00:42:32.540 10:58:38 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 2315535 ']' 00:42:32.540 10:58:38 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 2315535 00:42:32.540 10:58:38 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:42:32.540 10:58:38 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:42:32.540 10:58:38 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2315535 00:42:32.540 10:58:38 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:42:32.540 10:58:38 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:42:32.540 10:58:38 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2315535' 00:42:32.540 killing process with pid 2315535 00:42:32.540 10:58:38 keyring_linux -- common/autotest_common.sh@967 -- # kill 2315535 00:42:32.540 Received shutdown signal, test time was about 1.000000 seconds 00:42:32.540 00:42:32.540 Latency(us) 00:42:32.540 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:32.540 =================================================================================================================== 00:42:32.540 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:32.540 10:58:38 keyring_linux -- common/autotest_common.sh@972 -- # wait 2315535 00:42:32.799 10:58:38 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2315382 00:42:32.799 10:58:38 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 2315382 ']' 00:42:32.799 10:58:38 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 2315382 00:42:32.799 10:58:38 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:42:32.799 10:58:38 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:42:32.799 10:58:38 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2315382 00:42:32.800 10:58:38 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:42:32.800 10:58:38 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:42:32.800 10:58:38 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2315382' 00:42:32.800 killing process with pid 2315382 00:42:32.800 10:58:38 keyring_linux -- common/autotest_common.sh@967 -- # kill 2315382 00:42:32.800 10:58:38 keyring_linux -- common/autotest_common.sh@972 -- # wait 2315382 00:42:33.060 00:42:33.060 real 0m4.851s 00:42:33.060 user 0m8.602s 00:42:33.060 sys 0m1.460s 00:42:33.060 10:58:38 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:33.060 10:58:38 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:33.060 ************************************ 00:42:33.060 END TEST keyring_linux 00:42:33.060 ************************************ 00:42:33.060 10:58:38 -- common/autotest_common.sh@1142 -- # return 0 00:42:33.060 10:58:38 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:42:33.060 10:58:38 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:42:33.060 10:58:38 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:42:33.060 10:58:38 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:42:33.060 10:58:38 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:42:33.060 10:58:38 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:42:33.060 10:58:38 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 
']' 00:42:33.060 10:58:38 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:42:33.060 10:58:38 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:42:33.060 10:58:38 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:42:33.060 10:58:38 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:42:33.060 10:58:38 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:42:33.060 10:58:38 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:42:33.060 10:58:38 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:42:33.060 10:58:38 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:42:33.060 10:58:38 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:42:33.060 10:58:38 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:42:33.060 10:58:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:42:33.060 10:58:38 -- common/autotest_common.sh@10 -- # set +x 00:42:33.060 10:58:38 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:42:33.060 10:58:38 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:42:33.060 10:58:38 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:42:33.060 10:58:38 -- common/autotest_common.sh@10 -- # set +x 00:42:41.195 INFO: APP EXITING 00:42:41.195 INFO: killing all VMs 00:42:41.195 INFO: killing vhost app 00:42:41.195 INFO: EXIT DONE 00:42:44.506 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:42:44.506 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:42:44.506 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:42:44.506 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:42:44.506 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:42:44.506 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:42:44.506 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:42:44.506 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:42:44.506 0000:65:00.0 (144d a80a): Already using the nvme driver 00:42:44.506 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:42:44.506 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:42:44.506 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:42:44.506 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:42:44.506 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:42:44.506 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:42:44.506 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:42:44.506 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:42:48.711 Cleaning 00:42:48.711 Removing: /var/run/dpdk/spdk0/config 00:42:48.711 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:42:48.711 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:42:48.711 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:42:48.711 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:42:48.711 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:42:48.711 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:42:48.711 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:42:48.711 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:42:48.711 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:42:48.711 Removing: /var/run/dpdk/spdk0/hugepage_info 00:42:48.711 Removing: /var/run/dpdk/spdk1/config 00:42:48.711 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:42:48.711 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:42:48.711 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:42:48.711 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:42:48.711 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:42:48.711 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:42:48.711 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:42:48.711 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:42:48.711 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:42:48.711 Removing: /var/run/dpdk/spdk1/hugepage_info 00:42:48.711 Removing: /var/run/dpdk/spdk1/mp_socket 00:42:48.711 Removing: /var/run/dpdk/spdk2/config 00:42:48.711 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:42:48.711 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:42:48.711 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:42:48.711 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:42:48.711 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:42:48.711 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:42:48.711 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:42:48.711 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:42:48.711 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:42:48.711 Removing: /var/run/dpdk/spdk2/hugepage_info 00:42:48.711 Removing: /var/run/dpdk/spdk3/config 00:42:48.711 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:42:48.711 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:42:48.711 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:42:48.711 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:42:48.711 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:42:48.711 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:42:48.711 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:42:48.711 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:42:48.711 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:42:48.711 Removing: /var/run/dpdk/spdk3/hugepage_info 00:42:48.711 Removing: /var/run/dpdk/spdk4/config 00:42:48.711 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:42:48.711 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:42:48.711 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:42:48.711 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:42:48.711 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:42:48.711 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:42:48.711 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:42:48.711 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:42:48.711 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:42:48.711 Removing: /var/run/dpdk/spdk4/hugepage_info 00:42:48.711 Removing: /dev/shm/bdev_svc_trace.1 00:42:48.711 Removing: /dev/shm/nvmf_trace.0 00:42:48.711 Removing: /dev/shm/spdk_tgt_trace.pid1729770 00:42:48.711 Removing: /var/run/dpdk/spdk0 00:42:48.711 Removing: /var/run/dpdk/spdk1 00:42:48.711 Removing: /var/run/dpdk/spdk2 00:42:48.711 Removing: /var/run/dpdk/spdk3 00:42:48.711 Removing: /var/run/dpdk/spdk4 00:42:48.711 Removing: /var/run/dpdk/spdk_pid1728123 00:42:48.711 Removing: /var/run/dpdk/spdk_pid1729770 00:42:48.711 Removing: /var/run/dpdk/spdk_pid1730316 00:42:48.711 Removing: /var/run/dpdk/spdk_pid1731380 00:42:48.711 Removing: /var/run/dpdk/spdk_pid1731704 00:42:48.711 Removing: /var/run/dpdk/spdk_pid1732828 00:42:48.711 Removing: /var/run/dpdk/spdk_pid1733096 00:42:48.711 Removing: /var/run/dpdk/spdk_pid1733280 00:42:48.711 Removing: /var/run/dpdk/spdk_pid1734347 00:42:48.711 Removing: /var/run/dpdk/spdk_pid1735028 00:42:48.711 Removing: /var/run/dpdk/spdk_pid1735325 00:42:48.711 Removing: /var/run/dpdk/spdk_pid1735590 
00:42:48.711 Removing: /var/run/dpdk/spdk_pid1735984 00:42:48.711 Removing: /var/run/dpdk/spdk_pid1736374 00:42:48.711 Removing: /var/run/dpdk/spdk_pid1736727 00:42:48.711 Removing: /var/run/dpdk/spdk_pid1736874 00:42:48.711 Removing: /var/run/dpdk/spdk_pid1737144 00:42:48.711 Removing: /var/run/dpdk/spdk_pid1738527 00:42:48.711 Removing: /var/run/dpdk/spdk_pid1741768 00:42:48.711 Removing: /var/run/dpdk/spdk_pid1742120 00:42:48.711 Removing: /var/run/dpdk/spdk_pid1742378 00:42:48.711 Removing: /var/run/dpdk/spdk_pid1742515 00:42:48.711 Removing: /var/run/dpdk/spdk_pid1742902 00:42:48.711 Removing: /var/run/dpdk/spdk_pid1743221 00:42:48.711 Removing: /var/run/dpdk/spdk_pid1743598 00:42:48.711 Removing: /var/run/dpdk/spdk_pid1743791 00:42:48.711 Removing: /var/run/dpdk/spdk_pid1744041 00:42:48.711 Removing: /var/run/dpdk/spdk_pid1744305 00:42:48.711 Removing: /var/run/dpdk/spdk_pid1744484 00:42:48.711 Removing: /var/run/dpdk/spdk_pid1744681 00:42:48.711 Removing: /var/run/dpdk/spdk_pid1745120 00:42:48.711 Removing: /var/run/dpdk/spdk_pid1745474 00:42:48.711 Removing: /var/run/dpdk/spdk_pid1745861 00:42:48.711 Removing: /var/run/dpdk/spdk_pid1746050 00:42:48.711 Removing: /var/run/dpdk/spdk_pid1746227 00:42:48.711 Removing: /var/run/dpdk/spdk_pid1746326 00:42:48.711 Removing: /var/run/dpdk/spdk_pid1746675 00:42:48.711 Removing: /var/run/dpdk/spdk_pid1746886 00:42:48.711 Removing: /var/run/dpdk/spdk_pid1747070 00:42:48.711 Removing: /var/run/dpdk/spdk_pid1747414 00:42:48.711 Removing: /var/run/dpdk/spdk_pid1747761 00:42:48.711 Removing: /var/run/dpdk/spdk_pid1748051 00:42:48.711 Removing: /var/run/dpdk/spdk_pid1748250 00:42:48.711 Removing: /var/run/dpdk/spdk_pid1748607 00:42:48.711 Removing: /var/run/dpdk/spdk_pid1748956 00:42:48.711 Removing: /var/run/dpdk/spdk_pid1749309 00:42:48.711 Removing: /var/run/dpdk/spdk_pid1749514 00:42:48.711 Removing: /var/run/dpdk/spdk_pid1749699 00:42:48.711 Removing: /var/run/dpdk/spdk_pid1750258 00:42:48.712 Removing: /var/run/dpdk/spdk_pid1750799 00:42:48.712 Removing: /var/run/dpdk/spdk_pid1751175 00:42:48.712 Removing: /var/run/dpdk/spdk_pid1751345 00:42:48.712 Removing: /var/run/dpdk/spdk_pid1751585 00:42:48.712 Removing: /var/run/dpdk/spdk_pid1751937 00:42:48.712 Removing: /var/run/dpdk/spdk_pid1752284 00:42:48.712 Removing: /var/run/dpdk/spdk_pid1752580 00:42:48.712 Removing: /var/run/dpdk/spdk_pid1752708 00:42:48.712 Removing: /var/run/dpdk/spdk_pid1753117 00:42:48.712 Removing: /var/run/dpdk/spdk_pid1758010 00:42:48.712 Removing: /var/run/dpdk/spdk_pid1858724 00:42:48.712 Removing: /var/run/dpdk/spdk_pid1864360 00:42:48.712 Removing: /var/run/dpdk/spdk_pid1876539 00:42:48.712 Removing: /var/run/dpdk/spdk_pid1883542 00:42:48.712 Removing: /var/run/dpdk/spdk_pid1888912 00:42:48.712 Removing: /var/run/dpdk/spdk_pid1889597 00:42:48.712 Removing: /var/run/dpdk/spdk_pid1897548 00:42:48.712 Removing: /var/run/dpdk/spdk_pid1905556 00:42:48.712 Removing: /var/run/dpdk/spdk_pid1905558 00:42:48.712 Removing: /var/run/dpdk/spdk_pid1906561 00:42:48.712 Removing: /var/run/dpdk/spdk_pid1907567 00:42:48.712 Removing: /var/run/dpdk/spdk_pid1908591 00:42:48.712 Removing: /var/run/dpdk/spdk_pid1909240 00:42:48.712 Removing: /var/run/dpdk/spdk_pid1909362 00:42:48.712 Removing: /var/run/dpdk/spdk_pid1909593 00:42:48.712 Removing: /var/run/dpdk/spdk_pid1909835 00:42:48.712 Removing: /var/run/dpdk/spdk_pid1909913 00:42:48.712 Removing: /var/run/dpdk/spdk_pid1910918 00:42:48.712 Removing: /var/run/dpdk/spdk_pid1911921 00:42:48.712 Removing: /var/run/dpdk/spdk_pid1912923 
00:42:48.712 Removing: /var/run/dpdk/spdk_pid1913594 00:42:48.972 Removing: /var/run/dpdk/spdk_pid1913604 00:42:48.972 Removing: /var/run/dpdk/spdk_pid1913935 00:42:48.972 Removing: /var/run/dpdk/spdk_pid1915363 00:42:48.972 Removing: /var/run/dpdk/spdk_pid1916571 00:42:48.972 Removing: /var/run/dpdk/spdk_pid1927159 00:42:48.972 Removing: /var/run/dpdk/spdk_pid1927600 00:42:48.972 Removing: /var/run/dpdk/spdk_pid1933061 00:42:48.972 Removing: /var/run/dpdk/spdk_pid1940389 00:42:48.972 Removing: /var/run/dpdk/spdk_pid1943554 00:42:48.972 Removing: /var/run/dpdk/spdk_pid1957415 00:42:48.972 Removing: /var/run/dpdk/spdk_pid1969124 00:42:48.972 Removing: /var/run/dpdk/spdk_pid1971152 00:42:48.972 Removing: /var/run/dpdk/spdk_pid1972311 00:42:48.972 Removing: /var/run/dpdk/spdk_pid1994017 00:42:48.972 Removing: /var/run/dpdk/spdk_pid1999133 00:42:48.972 Removing: /var/run/dpdk/spdk_pid2030583 00:42:48.972 Removing: /var/run/dpdk/spdk_pid2036371 00:42:48.972 Removing: /var/run/dpdk/spdk_pid2038257 00:42:48.972 Removing: /var/run/dpdk/spdk_pid2040359 00:42:48.972 Removing: /var/run/dpdk/spdk_pid2040579 00:42:48.972 Removing: /var/run/dpdk/spdk_pid2040594 00:42:48.972 Removing: /var/run/dpdk/spdk_pid2040638 00:42:48.972 Removing: /var/run/dpdk/spdk_pid2041311 00:42:48.972 Removing: /var/run/dpdk/spdk_pid2043433 00:42:48.972 Removing: /var/run/dpdk/spdk_pid2044794 00:42:48.972 Removing: /var/run/dpdk/spdk_pid2045335 00:42:48.972 Removing: /var/run/dpdk/spdk_pid2047915 00:42:48.972 Removing: /var/run/dpdk/spdk_pid2048670 00:42:48.972 Removing: /var/run/dpdk/spdk_pid2049450 00:42:48.972 Removing: /var/run/dpdk/spdk_pid2054863 00:42:48.972 Removing: /var/run/dpdk/spdk_pid2061877 00:42:48.972 Removing: /var/run/dpdk/spdk_pid2067544 00:42:48.972 Removing: /var/run/dpdk/spdk_pid2114194 00:42:48.972 Removing: /var/run/dpdk/spdk_pid2118681 00:42:48.972 Removing: /var/run/dpdk/spdk_pid2126399 00:42:48.972 Removing: /var/run/dpdk/spdk_pid2127870 00:42:48.972 Removing: /var/run/dpdk/spdk_pid2129575 00:42:48.972 Removing: /var/run/dpdk/spdk_pid2135283 00:42:48.972 Removing: /var/run/dpdk/spdk_pid2141103 00:42:48.972 Removing: /var/run/dpdk/spdk_pid2151054 00:42:48.972 Removing: /var/run/dpdk/spdk_pid2151062 00:42:48.972 Removing: /var/run/dpdk/spdk_pid2156654 00:42:48.972 Removing: /var/run/dpdk/spdk_pid2156790 00:42:48.972 Removing: /var/run/dpdk/spdk_pid2157122 00:42:48.972 Removing: /var/run/dpdk/spdk_pid2157630 00:42:48.972 Removing: /var/run/dpdk/spdk_pid2157769 00:42:48.972 Removing: /var/run/dpdk/spdk_pid2158968 00:42:48.972 Removing: /var/run/dpdk/spdk_pid2160910 00:42:48.972 Removing: /var/run/dpdk/spdk_pid2162822 00:42:48.972 Removing: /var/run/dpdk/spdk_pid2164817 00:42:48.972 Removing: /var/run/dpdk/spdk_pid2166815 00:42:48.972 Removing: /var/run/dpdk/spdk_pid2168761 00:42:48.972 Removing: /var/run/dpdk/spdk_pid2176255 00:42:48.972 Removing: /var/run/dpdk/spdk_pid2177047 00:42:48.972 Removing: /var/run/dpdk/spdk_pid2178174 00:42:48.972 Removing: /var/run/dpdk/spdk_pid2179427 00:42:48.972 Removing: /var/run/dpdk/spdk_pid2185962 00:42:48.972 Removing: /var/run/dpdk/spdk_pid2189549 00:42:48.972 Removing: /var/run/dpdk/spdk_pid2196342 00:42:48.972 Removing: /var/run/dpdk/spdk_pid2203147 00:42:48.972 Removing: /var/run/dpdk/spdk_pid2213394 00:42:48.972 Removing: /var/run/dpdk/spdk_pid2222209 00:42:48.972 Removing: /var/run/dpdk/spdk_pid2222226 00:42:48.972 Removing: /var/run/dpdk/spdk_pid2246357 00:42:48.972 Removing: /var/run/dpdk/spdk_pid2247069 00:42:49.233 Removing: /var/run/dpdk/spdk_pid2247753 
00:42:49.233 Removing: /var/run/dpdk/spdk_pid2248433 00:42:49.233 Removing: /var/run/dpdk/spdk_pid2249427 00:42:49.233 Removing: /var/run/dpdk/spdk_pid2250123 00:42:49.233 Removing: /var/run/dpdk/spdk_pid2250839 00:42:49.233 Removing: /var/run/dpdk/spdk_pid2251545 00:42:49.233 Removing: /var/run/dpdk/spdk_pid2256966 00:42:49.233 Removing: /var/run/dpdk/spdk_pid2257308 00:42:49.233 Removing: /var/run/dpdk/spdk_pid2264874 00:42:49.233 Removing: /var/run/dpdk/spdk_pid2265075 00:42:49.233 Removing: /var/run/dpdk/spdk_pid2267697 00:42:49.233 Removing: /var/run/dpdk/spdk_pid2275375 00:42:49.233 Removing: /var/run/dpdk/spdk_pid2275382 00:42:49.233 Removing: /var/run/dpdk/spdk_pid2281856 00:42:49.233 Removing: /var/run/dpdk/spdk_pid2284228 00:42:49.233 Removing: /var/run/dpdk/spdk_pid2286425 00:42:49.233 Removing: /var/run/dpdk/spdk_pid2287817 00:42:49.233 Removing: /var/run/dpdk/spdk_pid2290246 00:42:49.233 Removing: /var/run/dpdk/spdk_pid2292206 00:42:49.233 Removing: /var/run/dpdk/spdk_pid2302765 00:42:49.233 Removing: /var/run/dpdk/spdk_pid2303421 00:42:49.233 Removing: /var/run/dpdk/spdk_pid2304083 00:42:49.233 Removing: /var/run/dpdk/spdk_pid2307131 00:42:49.233 Removing: /var/run/dpdk/spdk_pid2307551 00:42:49.233 Removing: /var/run/dpdk/spdk_pid2308167 00:42:49.233 Removing: /var/run/dpdk/spdk_pid2313050 00:42:49.233 Removing: /var/run/dpdk/spdk_pid2313289 00:42:49.233 Removing: /var/run/dpdk/spdk_pid2314809 00:42:49.233 Removing: /var/run/dpdk/spdk_pid2315382 00:42:49.233 Removing: /var/run/dpdk/spdk_pid2315535 00:42:49.233 Clean 00:42:49.233 10:58:54 -- common/autotest_common.sh@1451 -- # return 0 00:42:49.233 10:58:54 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:42:49.233 10:58:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:42:49.233 10:58:54 -- common/autotest_common.sh@10 -- # set +x 00:42:49.233 10:58:54 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:42:49.233 10:58:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:42:49.233 10:58:54 -- common/autotest_common.sh@10 -- # set +x 00:42:49.494 10:58:54 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:42:49.494 10:58:54 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:42:49.494 10:58:54 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:42:49.494 10:58:54 -- spdk/autotest.sh@391 -- # hash lcov 00:42:49.494 10:58:54 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:42:49.494 10:58:54 -- spdk/autotest.sh@393 -- # hostname 00:42:49.494 10:58:54 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-12 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:42:49.494 geninfo: WARNING: invalid characters removed from testname! 
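For readers following the coverage post-processing that starts with the capture command above and continues with the merge and filter steps below, the chain of lcov invocations reduces to roughly the sketch that follows. It is an illustration only, not part of the job itself; the workspace paths, the hostname-based test name, and the exclude patterns are copied from the log, while the standalone-script framing and the LCOV_OPTS shorthand (a subset of the --rc options actually passed) are assumptions made for readability.

#!/usr/bin/env bash
# Illustrative sketch of the coverage flow recorded in this log: capture, merge, filter.
# Assumption: lcov is on PATH and the workspace mirrors the layout used above.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
OUT=$SPDK_DIR/../output
LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"

# 1. Capture the counters produced by the test run, tagged with the node's hostname.
lcov $LCOV_OPTS -c -d "$SPDK_DIR" -t "$(hostname)" -o "$OUT/cov_test.info"

# 2. Merge the pre-test baseline with the post-test capture.
lcov $LCOV_OPTS -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

# 3. Strip sources that should not count toward SPDK coverage (DPDK, system headers, helper apps).
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $LCOV_OPTS -r "$OUT/cov_total.info" "$pattern" -o "$OUT/cov_total.info"
done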
00:43:16.139 10:59:19 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:16.710 10:59:22 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:19.253 10:59:24 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:20.632 10:59:26 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:22.010 10:59:27 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:23.916 10:59:29 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:25.299 10:59:30 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:43:25.299 10:59:30 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:25.299 10:59:30 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:43:25.299 10:59:30 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:25.299 10:59:30 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:25.299 10:59:30 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:25.299 10:59:30 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:25.299 10:59:30 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:25.299 10:59:30 -- paths/export.sh@5 -- $ export PATH 00:43:25.299 10:59:30 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:25.299 10:59:30 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:43:25.299 10:59:30 -- common/autobuild_common.sh@447 -- $ date +%s 00:43:25.299 10:59:30 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721638770.XXXXXX 00:43:25.299 10:59:30 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721638770.MN6ooJ 00:43:25.299 10:59:30 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:43:25.299 10:59:30 -- common/autobuild_common.sh@453 -- $ '[' -n v22.11.4 ']' 00:43:25.299 10:59:30 -- common/autobuild_common.sh@454 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:43:25.299 10:59:30 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:43:25.299 10:59:30 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:43:25.299 10:59:30 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:43:25.299 10:59:30 -- common/autobuild_common.sh@463 -- $ get_config_params 00:43:25.299 10:59:30 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:43:25.299 10:59:30 -- common/autotest_common.sh@10 -- $ set +x 00:43:25.299 10:59:30 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:43:25.299 10:59:30 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:43:25.299 10:59:30 -- pm/common@17 -- $ local monitor 00:43:25.299 10:59:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:25.299 10:59:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:25.299 10:59:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:25.299 
10:59:30 -- pm/common@21 -- $ date +%s 00:43:25.299 10:59:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:25.299 10:59:30 -- pm/common@21 -- $ date +%s 00:43:25.299 10:59:30 -- pm/common@21 -- $ date +%s 00:43:25.299 10:59:30 -- pm/common@25 -- $ sleep 1 00:43:25.299 10:59:30 -- pm/common@21 -- $ date +%s 00:43:25.299 10:59:30 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721638770 00:43:25.299 10:59:30 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721638770 00:43:25.299 10:59:30 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721638770 00:43:25.299 10:59:30 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721638770 00:43:25.299 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721638770_collect-vmstat.pm.log 00:43:25.299 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721638770_collect-cpu-load.pm.log 00:43:25.299 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721638770_collect-cpu-temp.pm.log 00:43:25.299 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721638770_collect-bmc-pm.bmc.pm.log 00:43:26.243 10:59:31 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:43:26.243 10:59:31 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144 00:43:26.243 10:59:31 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:43:26.243 10:59:31 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:43:26.243 10:59:31 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:43:26.243 10:59:31 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:43:26.243 10:59:31 -- spdk/autopackage.sh@19 -- $ timing_finish 00:43:26.243 10:59:31 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:43:26.243 10:59:31 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:43:26.243 10:59:31 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:43:26.504 10:59:31 -- spdk/autopackage.sh@20 -- $ exit 0 00:43:26.504 10:59:31 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:43:26.504 10:59:31 -- pm/common@29 -- $ signal_monitor_resources TERM 00:43:26.504 10:59:31 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:43:26.504 10:59:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:26.504 10:59:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:43:26.504 10:59:31 -- pm/common@44 -- $ pid=2329174 00:43:26.504 10:59:31 -- pm/common@50 -- $ kill -TERM 2329174 00:43:26.504 10:59:31 -- pm/common@42 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:43:26.504 10:59:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:43:26.504 10:59:31 -- pm/common@44 -- $ pid=2329175 00:43:26.504 10:59:31 -- pm/common@50 -- $ kill -TERM 2329175 00:43:26.504 10:59:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:26.504 10:59:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:43:26.504 10:59:31 -- pm/common@44 -- $ pid=2329177 00:43:26.504 10:59:31 -- pm/common@50 -- $ kill -TERM 2329177 00:43:26.504 10:59:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:26.504 10:59:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:43:26.504 10:59:31 -- pm/common@44 -- $ pid=2329200 00:43:26.504 10:59:31 -- pm/common@50 -- $ sudo -E kill -TERM 2329200 00:43:26.504 + [[ -n 1590454 ]] 00:43:26.504 + sudo kill 1590454 00:43:26.514 [Pipeline] } 00:43:26.532 [Pipeline] // stage 00:43:26.539 [Pipeline] } 00:43:26.557 [Pipeline] // timeout 00:43:26.562 [Pipeline] } 00:43:26.579 [Pipeline] // catchError 00:43:26.584 [Pipeline] } 00:43:26.602 [Pipeline] // wrap 00:43:26.608 [Pipeline] } 00:43:26.624 [Pipeline] // catchError 00:43:26.633 [Pipeline] stage 00:43:26.635 [Pipeline] { (Epilogue) 00:43:26.650 [Pipeline] catchError 00:43:26.652 [Pipeline] { 00:43:26.666 [Pipeline] echo 00:43:26.668 Cleanup processes 00:43:26.672 [Pipeline] sh 00:43:26.957 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:43:26.958 2329280 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:43:26.958 2329721 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:43:26.974 [Pipeline] sh 00:43:27.259 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:43:27.259 ++ grep -v 'sudo pgrep' 00:43:27.259 ++ awk '{print $1}' 00:43:27.259 + sudo kill -9 2329280 00:43:27.271 [Pipeline] sh 00:43:27.557 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:43:39.810 [Pipeline] sh 00:43:40.094 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:43:40.094 Artifacts sizes are good 00:43:40.109 [Pipeline] archiveArtifacts 00:43:40.115 Archiving artifacts 00:43:40.380 [Pipeline] sh 00:43:40.734 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:43:40.750 [Pipeline] cleanWs 00:43:40.759 [WS-CLEANUP] Deleting project workspace... 00:43:40.759 [WS-CLEANUP] Deferred wipeout is used... 00:43:40.765 [WS-CLEANUP] done 00:43:40.767 [Pipeline] } 00:43:40.785 [Pipeline] // catchError 00:43:40.797 [Pipeline] sh 00:43:41.081 + logger -p user.info -t JENKINS-CI 00:43:41.090 [Pipeline] } 00:43:41.105 [Pipeline] // stage 00:43:41.109 [Pipeline] } 00:43:41.127 [Pipeline] // node 00:43:41.131 [Pipeline] End of Pipeline 00:43:41.162 Finished: SUCCESS